- Sparse and Compressive Sensing Techniques
- Numerical Methods in Inverse Problems
- Statistical Methods and Inference
- Photoacoustic and Ultrasonic Imaging
- Advanced Graph Neural Networks
- Medical Imaging Techniques and Applications
- Image and Signal Denoising Methods
- Stochastic Gradient Optimization Techniques
- Advanced Optimization Algorithms Research
- Machine Learning and Algorithms
- Machine Learning and ELM
- Neural Networks and Applications
- Markov Chains and Monte Carlo Methods
- Graph Theory and Applications
- Risk and Portfolio Optimization
- Optimization and Variational Analysis
- Control Systems and Identification
- Systemic Lupus Erythematosus Research
- Soil Geostatistics and Mapping
- Advanced Causal Inference Techniques
- Distributed Sensor Networks and Detection Algorithms
- Random Matrices and Applications
- Point Processes and Geometric Inequalities
- Advanced Image Processing Techniques
- Complex Network Analysis Techniques
Centre National de la Recherche Scientifique
2016-2025
Laboratoire Jean-Alexandre Dieudonné
2021-2025
Université Côte d'Azur
2021-2025
Observatoire de la Côte d’Azur
2023
Institut de Mathématiques de Bourgogne
2016-2021
Université de Bourgogne
2017-2021
Institut de Mathématiques de Bordeaux
2017-2020
Centre de Recherche en Mathématiques de la Décision
2011-2017
Centre National pour la Recherche Scientifique et Technique (CNRST)
2017
Université Paris Dauphine-PSL
2012-2016
This paper investigates the theoretical guarantees of ℓ1-analysis regularization when solving linear inverse problems. Most previous works in the literature have mainly focused on the sparse synthesis prior, where sparsity is measured as the ℓ1 norm of the coefficients that synthesize the signal from a given dictionary. In contrast, the more general analysis regularization minimizes the ℓ1 norm of the correlations between the signal and the atoms of the dictionary, and it is these correlations that define the support. The corresponding variational problem encompasses several well-known regularizations such...
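The synthesis/analysis distinction described above can be made concrete with a toy example. Below, the analysis operator is taken to be a 1-D finite-difference operator (yielding anisotropic total variation as the analysis prior); the operator and the two test signals are illustrative choices, not taken from the paper. A piecewise-constant signal has a much smaller analysis ℓ1 norm than a steadily varying one, which is exactly why such priors promote piecewise-constant solutions.

```python
import numpy as np

def finite_difference(n):
    """1-D finite-difference analysis operator: (L x)_i = x_{i+1} - x_i."""
    L = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    L[idx, idx] = -1.0
    L[idx, idx + 1] = 1.0
    return L

n = 8
L = finite_difference(n)
x_pc = np.repeat([1.0, 3.0], n // 2)     # piecewise constant: one jump of size 2
x_ramp = np.arange(n, dtype=float)       # steadily varying: n-1 unit jumps

analysis_norm_pc = np.abs(L @ x_pc).sum()     # small analysis l1 norm (2.0)
analysis_norm_ramp = np.abs(L @ x_ramp).sum() # large analysis l1 norm (7.0)
```

The "support" mentioned in the abstract corresponds here to the set of indices where `L @ x` is nonzero, i.e. the jump locations of the signal.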
Algorithms for solving variational regularization of ill-posed inverse problems usually involve operators that depend on a collection of continuous parameters. When the operators enjoy some (local) regularity, these parameters can be selected using the so-called Stein Unbiased Risk Estimator (SURE). While this selection is usually performed by an exhaustive search, we address in this work the problem of using the SURE to efficiently optimize the model parameters. When considering nonsmooth regularizers, such as the popular $\ell_1$-norm corresponding...
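For the particular case of soft thresholding of a signal corrupted by Gaussian noise, the SURE admits a well-known closed form, which makes the exhaustive-search baseline mentioned above easy to sketch. The signal, noise level, and threshold grid below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def soft_threshold(y, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def sure_soft_threshold(y, t, sigma):
    """SURE of soft thresholding for y = x + noise, noise ~ N(0, sigma^2 I):
    SURE(t) = -n*sigma^2 + sum_i min(y_i^2, t^2) + 2*sigma^2 * #{i : |y_i| > t}.
    This is an unbiased estimate of the MSE that does not require x."""
    n = y.size
    return (-n * sigma**2
            + np.sum(np.minimum(y**2, t**2))
            + 2 * sigma**2 * np.count_nonzero(np.abs(y) > t))

# Exhaustive search over a threshold grid (the baseline selection strategy).
rng = np.random.default_rng(0)
sigma = 1.0
x = np.zeros(1000)
x[:50] = 5.0                                  # sparse ground truth
y = x + sigma * rng.standard_normal(1000)     # noisy observation
grid = np.linspace(0.0, 5.0, 101)
risks = [sure_soft_threshold(y, t, sigma) for t in grid]
t_best = grid[int(np.argmin(risks))]          # data-driven threshold choice
x_hat = soft_threshold(y, t_best)             # denoised estimate
```

The gradient-based alternative the abstract develops replaces this grid search with an optimization of the (weakly differentiable) SURE with respect to the parameters.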
This paper studies least-square regression penalized with partly smooth convex regularizers. This class of penalty functions is very large and versatile, and allows one to promote solutions conforming to some notion of low complexity. Indeed, such penalties/regularizers force the corresponding solutions to belong to a low-dimensional manifold (the so-called model), which remains stable when the argument of the penalty function undergoes small perturbations. Such a good sensitivity property is crucial to make the underlying low-complexity (manifold) model...
Journal Article: Model selection with low complexity priors. Samuel Vaiter (corresponding author; CNRS, CEREMADE, Université Paris-Dauphine, Paris Cedex 16, France; vaiter@ceremade.dauphine.fr), Mohammad Golbabaee (CEREMADE; golbabaee@ceremade.dauphine.fr), Jalal Fadili (GREYC, CNRS-ENSICAEN-Université de Caen, France; Jalal.Fadili@greyc.ensicaen.fr), Gabriel Peyré (CEREMADE, France; peyre@ceremade.dauphine.fr)...
In this paper, we propose a new framework to remove parts of the systematic errors affecting popular restoration algorithms, with a special focus on image processing tasks. Generalizing ideas that emerged for $\ell_1$ regularization, we develop an approach for refitting the results of standard methods toward the input data. Total variation regularizations and nonlocal means are special cases of interest. We identify the important covariant information that should be preserved by the refitting method and emphasize the importance of preserving the Jacobian...
Setting regularization parameters for Lasso-type estimators is notoriously difficult, though crucial in practice. The most popular hyperparameter optimization approach is grid-search using held-out validation data. Grid-search however requires choosing a predefined grid for each parameter, which scales exponentially in the number of parameters. Another approach casts hyperparameter optimization as a bi-level optimization problem, which one can solve by gradient descent. The key challenge for these methods is the estimation of the gradient with respect to the hyperparameters. Computing this gradient via...
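The grid-search baseline discussed above can be sketched with a basic proximal-gradient (ISTA) Lasso solver and a held-out validation split. The data-generating model, grid, and split sizes below are illustrative assumptions.

```python
import numpy as np

def ista_lasso(X, y, lam, n_iter=500):
    """Proximal gradient (ISTA) for the Lasso:
       min_w 0.5 * ||y - X w||^2 + lam * ||w||_1."""
    L = np.linalg.norm(X, 2) ** 2            # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = w - X.T @ (X @ w - y) / L        # gradient step on the data fit
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return w

# Synthetic sparse regression problem with a train/validation split.
rng = np.random.default_rng(1)
n, p = 100, 30
w_true = np.zeros(p)
w_true[:5] = 2.0
X = rng.standard_normal((n, p))
y = X @ w_true + 0.5 * rng.standard_normal(n)
X_tr, y_tr, X_val, y_val = X[:70], y[:70], X[70:], y[70:]

# Grid-search: one Lasso solve per candidate lambda, pick the best on held-out data.
lambdas = np.logspace(-2, 2, 20)
val_errors = [float(np.linalg.norm(y_val - X_val @ ista_lasso(X_tr, y_tr, lam)) ** 2)
              for lam in lambdas]
lam_best = float(lambdas[int(np.argmin(val_errors))])
```

The bi-level alternative the abstract pursues would instead differentiate `val_errors` with respect to `lam` through the solver (e.g. via implicit differentiation), avoiding the exponential cost of gridding several hyperparameters jointly.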
We aim to deepen the theoretical understanding of Graph Neural Networks (GNNs) on large graphs, with a focus on their expressive power. Existing analyses relate this notion to the graph isomorphism problem, which is mostly relevant for graphs of small sizes, or studied graph classification and regression tasks, while prediction tasks on nodes are far more relevant on large graphs. Recently, several works showed that, on very general random graph models, GNNs converge to certain functions as the number of nodes grows. In this paper, we provide a complete and...
We study the properties of Graph Convolutional Networks (GCNs) by analyzing their behavior on standard models of random graphs, where nodes are represented by latent variables and edges are drawn according to a similarity kernel. This allows us to overcome the difficulties of dealing with discrete notions such as isomorphisms on very large graphs, by considering instead more natural geometric aspects. We first study the convergence of GCNs to their continuous counterpart as the number of nodes grows. Our results are fully non-asymptotic and valid for relatively sparse...
In this paper, we propose a rigorous derivation of the expression of the projected Generalized Stein Unbiased Risk Estimator (GSURE) for the estimation of the (projected) risk associated to regularized ill-posed linear inverse problems using the sparsity-promoting ℓ1 penalty. The GSURE is an unbiased estimator of the recovery risk on the vector projected onto the orthogonal of the degradation operator kernel. Our framework can handle many well-known...
In this paper, we investigate in a unified way the structural properties of solutions to inverse problems. These are regularized by a generic class of semi-norms defined as a decomposable norm composed with a linear operator, the so-called analysis-type prior. This encompasses several well-known analysis-type regularizations such as the discrete total variation (in any dimension), the group-Lasso or the nuclear norm. Our main results establish sufficient conditions under which uniqueness and stability to a bounded noise...
This paper develops a novel framework to compute a projected Generalized Stein Unbiased Risk Estimator (GSURE) for a wide class of sparsely regularized solutions of inverse problems. This encompasses arbitrary convex data fidelities with both analysis and synthesis mixed ℓ1 − ℓ2 norms. The GSURE necessitates the (weak) derivative of the solution w.r.t. the observations. However, as the solution is not available in analytical form but rather through iterative schemes such as proximal splitting, we propose to compute this derivative iteratively by...
In this paper, we analyze classical variants of the Spectral Clustering (SC) algorithm in the Dynamic Stochastic Block Model (DSBM). Existing results show that, in the relatively sparse case where the expected degree grows logarithmically with the number of nodes, guarantees for the static case can be extended to the dynamic case and yield improved error bounds when the DSBM is sufficiently smooth in time, that is, when the communities do not change too much between two consecutive time steps. We improve over these results by drawing a new link between the sparsity and the smoothness of the DSBM:...
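Static spectral clustering on a single Stochastic Block Model snapshot, the building block of the dynamic analysis above, can be sketched as follows. The two-community model and its connection probabilities are illustrative (and denser than the logarithmic-degree regime the abstract studies, to keep the example small).

```python
import numpy as np

def sbm_adjacency(n, p_in, p_out, rng):
    """Symmetric adjacency matrix of a two-community Stochastic Block Model:
    the first n/2 nodes form community 0, the rest community 1."""
    labels = np.repeat([0, 1], n // 2)
    probs = np.where(labels[:, None] == labels[None, :], p_in, p_out)
    upper = np.triu(rng.random((n, n)) < probs, 1)  # i.i.d. upper-triangular draws
    A = upper + upper.T                              # symmetrize, no self-loops
    return A.astype(float), labels

rng = np.random.default_rng(2)
n = 200
A, labels = sbm_adjacency(n, p_in=0.5, p_out=0.05, rng=rng)

# Spectral clustering: the eigenvector of the second-largest eigenvalue of A
# aligns with the community structure; its sign pattern recovers the labels
# (up to a global label flip).
eigvals, eigvecs = np.linalg.eigh(A)   # eigh returns eigenvalues in ascending order
v2 = eigvecs[:, -2]
pred = (v2 > 0).astype(int)
accuracy = max(np.mean(pred == labels), np.mean(pred != labels))
```

In the dynamic setting, variants of this procedure smooth the adjacency matrices (or the spectral embeddings) across time steps before clustering, which is where the sparsity/smoothness trade-off of the abstract enters.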
We consider the problem of recovering elements of a low-dimensional model from under-determined linear measurements. To perform recovery, we consider the minimization of a convex regularizer subject to a data fit constraint. Given a model, we ask ourselves what is the 'best' regularizer for its recovery. To answer this question, we define an optimal regularizer as a function that maximizes a compliance measure with respect to the model. We introduce and study several notions of compliance. We give analytical expressions for compliance measures based on the best-known recovery...