You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle

DOI: 10.48550/arXiv.1905.00877
Publication Date: 2019-01-01
ABSTRACT
Deep learning achieves state-of-the-art results in many tasks in computer vision and natural language processing. However, recent works have shown that deep networks can be vulnerable to adversarial perturbations, which raised a serious robustness issue of deep networks. Adversarial training, typically formulated as a robust optimization problem, is an effective way of improving the robustness of deep networks. A major drawback of existing adversarial training algorithms is the computational overhead of generating adversarial examples, which is far greater than that of the network training itself. This leads to an unbearable overall computational cost for adversarial training. In this paper, we show that adversarial training can be cast as a discrete time differential game. Through analyzing the Pontryagin's Maximal Principle (PMP) of the problem, we observe that the adversary update is only coupled with the parameters of the first layer of the network. This inspires us to restrict most of the forward and back propagation within the first layer of the network during adversary updates. This effectively reduces the total number of full forward and backward propagations to only one for each group of adversary updates. Therefore, we refer to this algorithm as YOPO (You Only Propagate Once). Numerical experiments demonstrate that YOPO can achieve comparable defense accuracy with approximately 1/5 ~ 1/4 of the GPU time of the projected gradient descent (PGD) algorithm. Our codes are available at https://github.com/a1600012888/YOPO-You-Only-Propagate-Once.
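To make the abstract's key idea concrete, below is a minimal PyTorch-style sketch of the YOPO-m-n inner loop: each minibatch gets m full forward/backward passes, and after each one the adversary is updated n times using only the first layer, with the gradient at the first-layer output held fixed as the PMP co-state. The names `first_layer`, `rest_of_net`, and the hyperparameter values are illustrative assumptions; this is a sketch of the idea under those assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def yopo_inner_loop(first_layer, rest_of_net, x, y,
                    eps=8 / 255, step=2 / 255, m=5, n=3):
    """Return adversarial examples after a YOPO-m-n inner loop.

    Parameter gradients accumulate as a side effect (one full backward
    per outer iteration), to be applied by an optimizer afterwards.
    """
    eta = torch.zeros_like(x).uniform_(-eps, eps)  # perturbation
    for _ in range(m):
        eta.requires_grad_(True)
        z = first_layer(x + eta)                   # first-layer output
        loss = F.cross_entropy(rest_of_net(z), y)
        # One full propagation: keep the co-state p = dLoss/dz ...
        p = torch.autograd.grad(loss, z, retain_graph=True)[0].detach()
        loss.backward()                            # ... and parameter grads
        eta = eta.detach()
        for _ in range(n):                         # cheap adversary updates:
            eta.requires_grad_(True)               # only the first layer is
            surrogate = (p * first_layer(x + eta)).sum()  # propagated here
            g = torch.autograd.grad(surrogate, eta)[0]
            eta = (eta.detach() + step * g.sign()).clamp(-eps, eps)
    return (x + eta).clamp(0.0, 1.0)               # valid pixel range

```

For comparison, PGD-k adversarial training runs a full forward and backward pass for every one of its k adversary steps, whereas the loop above performs only m full propagations for roughly m*n adversary updates; this decoupling is the source of the roughly 4x-5x GPU-time saving reported in the abstract.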