Balance, Imbalance, and Rebalance: Understanding Robust Overfitting from a Minimax Game Perspective

Keywords: Overfitting, Robustness, Trainer
DOI: 10.48550/arXiv.2310.19360 | Publication Date: 2023-01-01
ABSTRACT
Adversarial Training (AT) has become arguably the state-of-the-art algorithm for extracting robust features. However, researchers have recently noticed that AT suffers from severe robust overfitting, particularly after learning rate (LR) decay. In this paper, we explain this phenomenon by viewing adversarial training as a dynamic minimax game between the model trainer and the attacker. Specifically, we analyze how LR decay breaks the balance between the two players by empowering the trainer with a stronger memorization ability, and show that such imbalance induces robust overfitting as a result of memorizing non-robust features. We validate this understanding with extensive experiments, and provide a holistic view of robust overfitting from the dynamics of both game players. This understanding further inspires us to alleviate robust overfitting by rebalancing the two players, either by regularizing the trainer's capacity or by improving the attack strength. Experiments show that the proposed ReBalanced Adversarial Training (ReBAT) can attain good robustness and does not suffer from robust overfitting even after very long training. Code is available at https://github.com/PKU-ML/ReBAT.
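The minimax game described in the abstract is the standard AT objective min_theta E_(x,y) max_{||delta||_inf <= eps} L(f_theta(x + delta), y): an inner attacker maximizes the loss over bounded perturbations, and an outer trainer minimizes it. The sketch below illustrates this two-player structure using plain PGD-based adversarial training (Madry et al.); it is not the paper's ReBAT algorithm, and the eps = 8/255, step size, and step count are common CIFAR-10 defaults assumed here purely for illustration.

```python
# Minimal sketch of the AT minimax game: an inner "attacker" (PGD) maximizes
# the loss, an outer "trainer" minimizes it. Standard PGD-AT, NOT ReBAT itself;
# eps/alpha/steps are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: the attacker searches for a perturbation delta
    with ||delta||_inf <= eps that maximizes the classification loss."""
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)  # ascent step + project into eps-ball
        delta = ((x + delta).clamp(0, 1) - x).detach()          # keep x + delta a valid image
    return delta

def at_step(model, optimizer, x, y):
    """Outer minimization: the trainer updates the model weights on the
    adversarial examples produced by the attacker."""
    delta = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x + delta), y)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Toy demo on random CIFAR-shaped data; a real run would use a ResNet on CIFAR-10.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
    print(at_step(model, optimizer, x, y))
```

In the paper's framing, LR decay strengthens the trainer's side of this game, and ReBAT restores the balance by either regularizing the trainer's capacity or making the inner attack stronger.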