- Adversarial Robustness in Machine Learning
- Anomaly Detection Techniques and Applications
- Machine Learning and Algorithms
- Stochastic Processes and Statistical Mechanics
- Explainable Artificial Intelligence (XAI)
- Security in Wireless Sensor Networks
- Mathematical Biology and Tumor Growth
- Random Matrices and Applications
- Advanced Neural Network Applications
- Advanced Statistical Methods and Models
- Stochastic Processes and Financial Applications
- Markov Chains and Monte Carlo Methods
- Sparse and Compressive Sensing Techniques
- Statistical Methods and Inference
New York University
2019-2021
Minimizing an adversarial surrogate risk is a common technique for learning robust classifiers. Prior work showed that convex surrogate losses are not statistically consistent in the adversarial context -- in other words, a minimizing sequence of the adversarial surrogate risk will not necessarily minimize the adversarial classification error. We connect the consistency of adversarial surrogate losses to properties of minimizers of the adversarial classification risk, known as adversarial Bayes classifiers. Specifically, under reasonable distributional assumptions, a convex surrogate loss is statistically consistent for adversarial learning iff the adversarial Bayes classifier satisfies a certain notion of uniqueness.
We provide a uniform upper bound on the minimal drift so that the one-per-site frog model on the $d$-ary tree is recurrent. To do this, we introduce a subprocess that couples across trees with different degrees. Finding such couplings for frog models on nested sequences of graphs is known to be difficult. The upper bound comes from combining the coupling with a new, simpler proof that the frog model on the binary tree is recurrent when the drift is sufficiently strong. Additionally, we describe a coupling between frog models on trees for which the degree of the smaller tree divides that of the larger one. This implies that the critical drift has a limit as $d$ tends to infinity...
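The object studied above can be illustrated numerically. Below is a minimal Monte Carlo sketch of the one-per-site frog model with drift on a truncated $d$-ary tree; it is an illustrative toy, not the paper's coupling construction, and all function and parameter names are assumptions made for the example.

```python
import random

def simulate_frog_model(d=2, drift=0.6, max_depth=10, n_steps=200, seed=0):
    """Toy sketch of the one-per-site frog model on the d-ary tree.

    Every vertex starts with one sleeping frog; the root frog is awake.
    Each awake frog steps toward the root with probability `drift` (from
    the root it must step to a uniform child) and to a uniformly chosen
    child otherwise, waking the sleeping frog at any vertex it visits for
    the first time.  Vertices are encoded as tuples of child indices, the
    root being the empty tuple; the tree is truncated at `max_depth` and
    frogs that step past it are dropped.  Returns the number of visits to
    the root, a crude numerical proxy for recurrence.
    """
    rng = random.Random(seed)
    frogs = [()]        # positions of the awake frogs
    visited = {()}      # vertices whose sleeping frog has been woken
    root_visits = 0
    for _ in range(n_steps):
        if not frogs:
            break
        moved = []
        for v in frogs:
            if v and rng.random() < drift:
                w = v[:-1]                    # biased step toward the root
            else:
                w = v + (rng.randrange(d),)   # uniform step to a child
            if len(w) > max_depth:
                continue                      # frog leaves the truncated tree
            if w == ():
                root_visits += 1
            if w not in visited:
                visited.add(w)
                moved.append(w)               # the newly woken frog
            moved.append(w)                   # the moving frog itself
        frogs = moved
    return root_visits
```

Comparing runs across several drift values gives a rough numerical feel for where recurrence sets in, which is the quantity the abstract's uniform upper bound controls.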
Adversarial robustness is an increasingly critical property of classifiers in applications. The design of robust algorithms relies on surrogate losses, since the optimization of the adversarial loss with most hypothesis sets is NP-hard. But which surrogate losses should be used, and when do they benefit from theoretical guarantees? We present an extensive study of this question, including a detailed analysis of the H-calibration and H-consistency of adversarial surrogate losses. We show that, under some general assumptions, convex loss functions, or the supremum-based convex losses often...
We propose a new notion of uniqueness for the adversarial Bayes classifier in the setting of binary classification. Analyzing this notion produces a simple procedure for computing all adversarial Bayes classifiers for a well-motivated family of one-dimensional data distributions. This characterization is then leveraged to show that, as the perturbation radius increases, certain notions of regularity improve for adversarial Bayes classifiers. We demonstrate with various examples that the boundary of the adversarial Bayes classifier frequently lies near the boundary of the Bayes classifier.
Adversarial training is a common technique for learning robust classifiers. Prior work showed that convex surrogate losses are not statistically consistent in the adversarial context -- in other words, a minimizing sequence of the adversarial surrogate risk will not necessarily minimize the adversarial classification error. We connect the consistency of adversarial surrogate losses to properties of minimizers of the adversarial classification risk, known as \emph{adversarial Bayes classifiers}. Specifically, under reasonable distributional assumptions, a convex surrogate loss is statistically consistent for adversarial learning iff the adversarial Bayes classifier satisfies a certain notion of uniqueness.
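To make "adversarial surrogate risk" concrete: for a one-dimensional linear classifier and a monotone decreasing surrogate such as the logistic loss, the inner supremum over an $\epsilon$-ball has a simple closed form. The sketch below is a minimal illustration of that quantity, not the paper's method; the function names and toy data are assumptions made for the example.

```python
import math

def logistic_loss(z):
    # numerically stable log(1 + exp(-z))
    return math.log1p(math.exp(-abs(z))) + max(-z, 0.0)

def adversarial_surrogate_risk(w, b, data, eps):
    """Supremum-based adversarial logistic risk of f(x) = w*x + b on 1-D data.

    Because the logistic loss is decreasing in the margin, the worst-case
    perturbation in [-eps, eps] simply shrinks the margin by eps*|w|:
        sup_{|delta| <= eps} loss(y * f(x + delta)) = loss(y * f(x) - eps*|w|).
    `data` is a list of (x, y) pairs with labels y in {-1, +1}.
    """
    return sum(logistic_loss(y * (w * x + b) - eps * abs(w))
               for x, y in data) / len(data)
```

Setting `eps=0` recovers the standard surrogate risk, so the adversarial risk is always at least as large; consistency asks whether driving the adversarial surrogate risk down also drives down the adversarial classification error.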
Adversarial or test-time robustness measures the susceptibility of a classifier to perturbations of the test input. While there has been a flurry of recent work on designing defenses against such perturbations, the theory of adversarial robustness is not well understood. In order to make progress on this, we focus on the problem of understanding generalization in adversarial settings, via the lens of Rademacher complexity. We give upper and lower bounds for the adversarial empirical Rademacher complexity of linear hypotheses with adversarial perturbations measured in the $l_r$-norm for an arbitrary $r \geq 1$. This...
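The empirical Rademacher complexity at the heart of this abstract can be estimated numerically. The sketch below handles the standard (non-adversarial) linear class $\{x \mapsto \langle w, x\rangle : \|w\|_p \le W\}$, for which norm duality gives the well-known identity $(W/n)\,\mathbb{E}_\sigma \|\sum_i \sigma_i x_i\|_q$ with $1/p + 1/q = 1$; the paper's bounds concern the adversarial analogue of this quantity. Function and parameter names are assumptions made for the example.

```python
import random

def empirical_rademacher_linear(xs, W=1.0, p=2, n_trials=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity of
    {x -> <w, x> : ||w||_p <= W} on the sample `xs` (a list of vectors).

    By duality, sup_{||w||_p <= W} <w, s> = W * ||s||_q with 1/p + 1/q = 1,
    so the complexity equals (W/n) * E_sigma || sum_i sigma_i x_i ||_q.
    """
    rng = random.Random(seed)
    n, dim = len(xs), len(xs[0])
    q = p / (p - 1) if p > 1 else float("inf")
    total = 0.0
    for _ in range(n_trials):
        sig = [rng.choice((-1, 1)) for _ in range(n)]          # Rademacher signs
        s = [sum(sig[i] * xs[i][j] for i in range(n)) for j in range(dim)]
        if q == float("inf"):
            norm = max(abs(v) for v in s)                      # dual of l_1
        else:
            norm = sum(abs(v) ** q for v in s) ** (1.0 / q)
        total += norm
    return W * total / (n * n_trials)
```

For two copies of the point $x = (1)$ with $W = 1$, $p = 2$, the exact value is $\tfrac{1}{2}\mathbb{E}|\sigma_1 + \sigma_2| = 0.5$, which the estimator should approach.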
Adversarial training is one of the most popular methods for training classifiers robust to adversarial attacks; however, it is not well understood from a theoretical perspective. We prove existence, regularity, and minimax theorems for adversarial surrogate risks. Our results explain some empirical observations on adversarial robustness from prior work and suggest new directions in algorithm development. Furthermore, our results extend previously known existence theorems for the adversarial classification risk to surrogate risks.
Adversarial robustness is a critical property in a variety of modern machine learning applications. While it has been the subject of several recent theoretical studies, many important questions related to adversarial robustness are still open. In this work, we study a fundamental question regarding Bayes optimality for adversarial robustness. We provide general sufficient conditions under which the existence of a Bayes-optimal classifier can be guaranteed for adversarial robustness. Our results provide a useful tool for the subsequent study of surrogate losses and their consistency...
A recent trend in explainable AI research has focused on surrogate modeling, where neural networks are approximated by simpler ML algorithms such as kernel machines. A second trend has been to utilize kernel functions in various explain-by-example or data attribution tasks. In this work, we combine these two trends to analyze approximate empirical neural tangent kernels (eNTK) for data attribution. Approximation is critical for eNTK analysis due to the high computational cost of computing the eNTK. We define new approximate eNTK and perform a novel analysis of how well...
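For intuition, the empirical NTK between two inputs is just the inner product of the parameter gradients of the network output at those inputs. Below is a minimal, dependency-free sketch that builds the eNTK Gram matrix by finite differences for a tiny scalar model; it is an illustration of the definition, not the paper's approximation scheme, and the model and function names are assumptions made for the example.

```python
import math

def num_grad(f, theta, x, h=1e-5):
    """Central-difference gradient of f(theta, x) with respect to theta."""
    g = []
    for i in range(len(theta)):
        tp, tm = theta[:], theta[:]
        tp[i] += h
        tm[i] -= h
        g.append((f(tp, x) - f(tm, x)) / (2 * h))
    return g

def entk_gram(f, theta, xs):
    """Empirical NTK Gram matrix: K[i][j] = <grad_theta f(x_i), grad_theta f(x_j)>."""
    grads = [num_grad(f, theta, x) for x in xs]
    return [[sum(a * b for a, b in zip(gi, gj)) for gj in grads] for gi in grads]

def tiny_net(theta, x):
    # one-hidden-unit network f(x) = v * tanh(w * x), theta = [w, v]
    w, v = theta
    return v * math.tanh(w * x)
```

In an explain-by-example setting, the row of this matrix pairing a test point against the training points gives kernel-similarity attribution scores; the computational burden the abstract refers to comes from these gradients having one entry per network parameter.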
Robustness to adversarial perturbations is of paramount concern in modern machine learning. One of the state-of-the-art methods for training robust classifiers is adversarial training, which involves minimizing a supremum-based surrogate risk. The statistical consistency of surrogate risks is well understood in the context of standard machine learning, but not in the adversarial setting. In this paper, we characterize which supremum-based surrogates are consistent for distributions absolutely continuous with respect to the Lebesgue measure in binary classification. Furthermore, we obtain...