Improving the Transferability of Adversarial Examples by Feature Augmentation
DOI:
10.48550/arxiv.2407.06714
Publication Date:
2024-07-09
AUTHORS (6)
ABSTRACT
Despite the success of input transformation-based attacks at boosting adversarial transferability, performance remains unsatisfying because the discrepancy across models is ignored. In this paper, we propose a simple but effective feature augmentation attack (FAUG) method, which improves adversarial transferability without introducing extra computation costs. Specifically, we inject random noise into the intermediate features of the model to enlarge the diversity of the attack gradient, thereby mitigating the risk of overfitting to a specific model and notably amplifying transferability. Moreover, our method can be combined with existing gradient-based attacks to further augment their performance. Extensive experiments conducted on the ImageNet dataset across CNN and transformer models corroborate the efficacy of our method, e.g., we achieve improvements of +26.22% and +5.57% on input transformation-based attacks and combination methods, respectively.
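The mechanism the abstract describes, injecting random noise into a surrogate model's intermediate features so that the attack gradient is more diverse across steps, can be illustrated with a short sketch. The following is a minimal illustrative example and not the authors' released implementation: the use of a PyTorch forward hook, the choice of ResNet-50 and its layer2 block as the injection point, the Gaussian noise scale sigma, and the I-FGSM hyperparameters are all assumptions made for demonstration.

```python
# Minimal sketch of a feature-augmentation-style transfer attack (assumptions:
# PyTorch surrogate, Gaussian feature noise, I-FGSM update; layer, sigma, eps,
# alpha, and steps are illustrative, not the paper's exact configuration).
import torch
import torchvision.models as models

def feature_noise_hook(sigma):
    """Return a forward hook that adds Gaussian noise to a layer's output."""
    def hook(module, inputs, output):
        return output + sigma * torch.randn_like(output)
    return hook

def faug_style_attack(model, x, y, layer, eps=16/255, alpha=2/255, steps=10, sigma=0.05):
    """Iterative FGSM on a surrogate whose intermediate features are perturbed
    with fresh random noise at every forward pass, diversifying the gradients."""
    handle = layer.register_forward_hook(feature_noise_hook(sigma))
    loss_fn = torch.nn.CrossEntropyLoss()
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Untargeted attack: step in the direction that increases the loss,
        # then project back into the epsilon-ball and the valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0, 1)
    handle.remove()
    return x_adv

if __name__ == "__main__":
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
    x = torch.rand(1, 3, 224, 224)   # stand-in for a preprocessed ImageNet image
    y = torch.tensor([281])          # stand-in label (ImageNet class index)
    x_adv = faug_style_attack(model, x, y, layer=model.layer2)
```

Because the noise is resampled on every forward pass, each attack iteration sees a slightly different feature map, which is what discourages the perturbation from overfitting to the single surrogate model, per the abstract's argument.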