Persistent Classification: A New Approach to Stability of Data and Adversarial Examples

FOS: Computer and information sciences — Machine Learning (cs.LG)
DOI: 10.48550/arxiv.2404.08069
Publication Date: 2024-04-11
ABSTRACT
There are a number of hypotheses underlying the existence of adversarial examples for classification problems. These include the high dimensionality of the data, high codimension of the data manifolds of interest in the ambient space, and the possibility that the structure of machine learning models encourages classifiers to develop decision boundaries close to data points. This article proposes a new framework for studying adversarial examples that does not depend directly on the distance to the decision boundary. Similarly to the smoothed classifier literature, we define a (natural or adversarial) point to be $(\gamma,\sigma)$-stable if the probability of the same classification is at least $\gamma$ for points sampled in a Gaussian neighborhood of the point with a given standard deviation $\sigma$. We focus on the differences between persistence metrics along interpolants of natural and adversarial points, and show that adversarial examples have significantly lower persistence than natural examples for large neural networks in the context of the MNIST and ImageNet datasets. We connect this lack of persistence with decision boundary geometry by measuring the angles of interpolants with respect to decision boundaries. Finally, we connect this approach with robustness by developing a manifold alignment gradient metric and demonstrating the increase in robustness that can be achieved when training with the addition of this metric.
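The $(\gamma,\sigma)$-stability criterion lends itself to a straightforward Monte Carlo estimate: sample points from a Gaussian neighborhood of a given input and measure how often the classifier agrees with the label it assigns to the input itself. The sketch below illustrates this idea in PyTorch; the function name `persistence`, the sampling parameters, and the interpolation example are illustrative assumptions, not code from the paper.

```python
import torch

def persistence(model, x, sigma, n_samples=1000, batch_size=256):
    """Monte Carlo estimate of the probability that points drawn from a
    Gaussian neighborhood of x (standard deviation sigma) receive the same
    label as x. A point is (gamma, sigma)-stable when this probability
    is at least gamma. Illustrative sketch, not the authors' implementation."""
    model.eval()
    with torch.no_grad():
        base_label = model(x.unsqueeze(0)).argmax(dim=1)  # label assigned to x itself
        agree, remaining = 0, n_samples
        while remaining > 0:
            b = min(batch_size, remaining)
            noise = sigma * torch.randn(b, *x.shape)       # Gaussian perturbations
            preds = model(x.unsqueeze(0) + noise).argmax(dim=1)
            agree += (preds == base_label).sum().item()
            remaining -= b
    return agree / n_samples

# Hypothetical usage: compare persistence along the linear interpolant
# between a natural point x_nat and an adversarial point x_adv.
# ts = torch.linspace(0, 1, 11)
# scores = [persistence(model, (1 - t) * x_nat + t * x_adv, sigma=0.1) for t in ts]
```

Under this reading, the paper's reported gap between natural and adversarial examples would appear as consistently lower `persistence` values near the adversarial end of the interpolant.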