Analyzing the Robustness of Nearest Neighbors to Adversarial Examples
DOI:
10.48550/arxiv.1706.03922
Publication Date:
2017-01-01
AUTHORS (3)
ABSTRACT
Motivated by safety-critical applications, test-time attacks on classifiers via adversarial examples have recently received a great deal of attention. However, there is a general lack of understanding of why adversarial examples arise; whether they originate due to inherent properties of the data or due to lack of training samples remains ill-understood. In this work, we introduce a theoretical framework analogous to bias-variance theory for understanding these effects. We use our framework to analyze the robustness of a canonical non-parametric classifier - the k-nearest neighbor classifier. Our analysis shows that its robustness properties depend critically on the value of k: the classifier may be inherently non-robust for small k, but its robustness approaches that of the Bayes Optimal classifier for fast-growing k. We propose a novel modified 1-nearest neighbor classifier and guarantee its robustness in the large sample limit. Our experiments suggest that this classifier may have good robustness properties even for reasonable data set sizes.
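The abstract's claim that k-NN robustness depends critically on k can be illustrated with a toy experiment: for a fixed test point, find how far it must be perturbed toward the nearest opposite-label training point before a 1-NN prediction flips, then check that a larger k resists the same perturbation. The sketch below is not the paper's algorithm or its modified 1-NN classifier; the dataset, point coordinates, and step size are all illustrative choices.

```python
import numpy as np

def knn_predict(X, y, x, k):
    """Predict the label of x by majority vote among its k nearest training points."""
    d = np.linalg.norm(X - x, axis=1)   # Euclidean distances to all training points
    nn = np.argsort(d)[:k]              # indices of the k nearest neighbors
    return int(np.round(y[nn].mean()))  # majority vote for labels in {0, 1}

# Toy 2-D data: a cluster of class 0 with one nearby class-1 point (hypothetical values).
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.1], [1.0, 1.0], [0.35, 0.0]])
y = np.array([0, 0, 0, 1, 1])

x0 = np.array([0.15, 0.0])              # test point, classified 0 by 1-NN

# Push x0 toward the nearest opposite-label training point until 1-NN flips;
# the accumulated step size approximates the adversarial perturbation radius.
target = X[4]
direction = (target - x0) / np.linalg.norm(target - x0)
eps = 0.0
while knn_predict(X, y, x0 + eps * direction, k=1) == 0:
    eps += 0.01

print(f"1-NN prediction flips after a perturbation of ~{eps:.2f}")
# The averaged vote of a larger k is more stable: the same perturbation fails.
print("k=3 prediction at the flip point:", knn_predict(X, y, x0 + eps * direction, k=3))
```

With this data the 1-NN prediction flips after a small perturbation while 3-NN still outvotes the single flipped neighbor, consistent with the abstract's point that small k can be inherently non-robust.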