Shayan Aziznejad

ORCID: 0000-0002-6871-9703
Research Areas
  • Sparse and Compressive Sensing Techniques
  • Image and Signal Denoising Methods
  • Advanced Numerical Analysis Techniques
  • Mathematical Analysis and Transform Methods
  • Photoacoustic and Ultrasonic Imaging
  • Domain Adaptation and Few-Shot Learning
  • Numerical methods in inverse problems
  • Control Systems and Identification
  • Electrical and Bioimpedance Tomography
  • Neural Networks and Applications
  • Medical Imaging Techniques and Applications
  • Mathematical Inequalities and Applications
  • Digital Filter Design and Implementation
  • Machine Learning and ELM
  • Model Reduction and Neural Networks
  • Matrix Theory and Algorithms
  • Advanced machining processes and optimization
  • Image and Object Detection Techniques
  • Stochastic processes and financial applications
  • Cell Image Analysis Techniques
  • Advanced Harmonic Analysis Research
  • Advanced Biosensing Techniques and Applications
  • Medical Image Segmentation Techniques
  • Stochastic Gradient Optimization Techniques
  • Advanced Fluorescence Microscopy Techniques

École Polytechnique Fédérale de Lausanne
2018-2023

Centre d'Imagerie BioMedicale
2019-2023

École Polytechnique
2021-2022

Laboratoire d’Imagerie Biomédicale
2021

Sharif University of Technology
2017

We develop an efficient computational solution to train deep neural networks (DNN) with free-form activation functions. To make the problem well-posed, we augment the cost functional of the DNN by adding an appropriate shape regularization: the sum of the second-order total-variations of the trainable nonlinearities. The representer theorem for DNNs tells us that the optimal activation functions are adaptive piecewise-linear splines, which allows us to recast the problem as a parametric optimization. The challenging point is that the corresponding basis functions (ReLUs)...
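A minimal sketch of the parametric side of this idea, assuming PyTorch and a uniform knot grid (the class name, grid bounds, and initialization are illustrative choices of mine, not the authors' released code):

import torch

class LinearSplineActivation(torch.nn.Module):
    """Pointwise activation given by learnable values on a uniform knot grid.

    Hypothetical sketch: piecewise-linear interpolation between learnable
    knot values; outside the grid, the boundary segment is extended linearly.
    """
    def __init__(self, num_knots=21, x_min=-3.0, x_max=3.0):
        super().__init__()
        self.grid = torch.linspace(x_min, x_max, num_knots)
        self.coeffs = torch.nn.Parameter(self.grid.clone())  # init: identity

    def forward(self, x):
        step = self.grid[1] - self.grid[0]
        # index of the left knot of the segment containing each input
        idx = torch.clamp(((x - self.grid[0]) / step).floor().long(),
                          0, len(self.grid) - 2)
        x0 = self.grid[idx]
        y0, y1 = self.coeffs[idx], self.coeffs[idx + 1]
        return y0 + (y1 - y0) * (x - x0) / step

    def tv2(self):
        # the second-order total variation of a linear spline reduces to the
        # l1 norm of the second-order finite differences of its knot values
        d2 = self.coeffs[2:] - 2 * self.coeffs[1:-1] + self.coeffs[:-2]
        return d2.abs().sum()

During training, a shape regularization of this kind would add lambda * tv2(), summed over all trainable activation modules, to the data-fidelity loss.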

10.1109/ojsp.2020.3039379 article EN cc-by IEEE Open Journal of Signal Processing 2020-01-01

We introduce a variational framework to learn the activation functions of deep neural networks. Our aim is to increase the capacity of the network while controlling an upper bound of the actual Lipschitz constant of the input-output relation. To that end, we first establish a global bound for the Lipschitz constant of neural networks. Based on the obtained bound, we then formulate the problem of learning activation functions. The problem is infinite-dimensional and not computationally tractable. However, we prove that there always exists a solution that has continuous piecewise-linear (linear-spline) activations...
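The kind of global bound referred to here composes layer-wise estimates; schematically, in my own notation (the paper's exact statement may differ),

$$\mathrm{Lip}(f) \le \prod_{\ell=1}^{L} \|\mathbf{W}_\ell\|\,\mathrm{Lip}(\sigma_\ell),$$

where the $\mathbf{W}_\ell$ are the linear-layer weights and the $\sigma_\ell$ are the pointwise activations, so that regularizing the $\mathrm{Lip}(\sigma_\ell)$ factors controls the input-output Lipschitz constant of the network.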

10.1109/tsp.2020.3014611 article EN IEEE Transactions on Signal Processing 2020-01-01

We study one-dimensional continuous-domain inverse problems with multiple generalized total-variation regularization, which involves the joint use of several regularization operators. Our starting point is a new representer theorem that states that such problems have hybrid-spline solutions with a total sparsity bounded by the number of measurements. We show that such problems can be discretized in an exact way using a union of B-spline dictionary bases matched to the regularization operators. We then propose a multiresolution algorithm that selects an appropriate grid size that depends on...
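Schematically, and in notation of my own, the class of problems studied here reads

$$\min_{f}\ \sum_{m=1}^{M}\big(y_m - \nu_m(f)\big)^2 + \sum_{i=1}^{I} \lambda_i \,\|\mathrm{L}_i f\|_{\mathcal{M}},$$

where the $\mathrm{L}_i$ are the regularization operators and $\|\cdot\|_{\mathcal{M}}$ is the total-variation norm on measures; the representer theorem then bounds the total number of spline knots of a solution by the number $M$ of measurements.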

10.1109/tsp.2019.2944754 article EN IEEE Transactions on Signal Processing 2019-10-04

In this paper, we characterize the class of extremal points of the unit ball of the Hessian–Schatten total variation (HTV) functional. The underlying motivation for our work stems from a general representer theorem that characterizes the solution set of regularized linear inverse problems in terms of the extremal points of the ball of the regularization functional. Our analysis is mainly based on studying the class of continuous and piecewise linear (CPWL) functions. In particular, we show that in dimension $d=2$...

10.1007/s00526-023-02611-6 article EN cc-by Calculus of Variations and Partial Differential Equations 2023-11-20

We characterize the solution of a broad class of convex optimization problems that address the reconstruction of a function from a finite number of linear measurements. The underlying hypothesis is that the solution is decomposable as a sum of components, where each component belongs to its own prescribed Banach space; moreover, the problem is regularized by penalizing some composite norm of the solution. We establish general conditions for existence and derive the generic parametric representation of the solution components. These representations fall into three...

10.1016/j.acha.2021.07.002 article EN cc-by-nc-nd Applied and Computational Harmonic Analysis 2021-07-28

We characterize the local smoothness and the asymptotic growth rate of the Lévy white noise. We do so by characterizing the weighted Besov spaces in which it is located. We extend known results in two ways. First, we obtain new bounds for the local smoothness via the Blumenthal-Getoor indices of the noise; we also deduce the critical local smoothness when the two indices coincide, which is true for symmetric-alpha-stable, compound Poisson, and symmetric-gamma white noises, to name a few. Second, we express the critical asymptotic growth rate in terms of the moment properties of the noise. Previous analyses only provided lower bounds for both the local smoothness and the growth rate. Showing the sharpness of these bounds requires us...

10.48550/arxiv.1801.09245 preprint EN other-oa arXiv (Cornell University) 2018-01-01

In this paper, we formally investigate two mathematical aspects of Hermite splines that are relevant to practical applications. We first demonstrate that Hermite splines are maximally localized, in the sense that the size of their support is minimal among pairs of functions with identical reproduction properties. Then, we precisely quantify the approximation power of Hermite splines for the reconstruction of functions and of their derivatives. It is known that Hermite-spline and B-spline schemes have the same approximation order. More precisely, their approximation error vanishes as $O(T^4)$ when the step size $T$ goes to zero. In this work, we show that they actually...
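In generic approximation-theoretic notation (mine, not the paper's), the quoted fourth-order behavior is

$$\|f - \mathcal{A}_T f\|_{L_2} = O(T^4), \qquad T \to 0,$$

where $\mathcal{A}_T f$ denotes the (Hermite- or B-spline) reconstruction of $f$ at step size $T$ and $f$ is assumed sufficiently smooth.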

10.1016/j.cam.2019.112503 article EN cc-by Journal of Computational and Applied Mathematics 2019-10-11

We characterize the local smoothness and the asymptotic growth rate of the Lévy white noise. We do so by characterizing the weighted Besov spaces in which it is located. We extend known results in two ways. First, we obtain new bounds for the local smoothness via the Blumenthal-Getoor indices of the noise; we also deduce the critical local smoothness when the two indices coincide, which is true for symmetric-$\alpha$-stable, compound Poisson, and symmetric-gamma white noises, to name a few. Second, we express the critical asymptotic growth rate in terms of the moment properties of the noise. Previous analyses only provided lower bounds for both the local smoothness and the growth rate. Showing the sharpness of these bounds requires...

10.1214/20-ejp554 article EN cc-by Electronic Journal of Probability 2020-01-01

In this paper, we provide a Banach-space formulation of supervised learning with generalized total-variation (gTV) regularization. We identify the class of kernel functions that are admissible in this framework. Then, we propose supervised learning over a continuous-domain hybrid search space with gTV regularization and show that the solution admits a multikernel expansion with adaptive positions. In this representation, the number of active kernels is upper-bounded by the number of data points, while the gTV regularization imposes an $\ell_1$ penalty on the kernel coefficients. Finally, we illustrate numerically...
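In my own notation, the multikernel expansion referred to here has the generic form

$$f(x) = \sum_{m=1}^{M} a_m\, h_m(x - \tau_m), \qquad M \le N,$$

with adaptive positions $\tau_m$ and admissible kernels $h_m$, where the number $N$ of data points bounds the number of active terms and the gTV regularization acts as an $\ell_1$ penalty on the coefficients $a_m$.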

10.1137/20m1318882 article EN SIAM Journal on Mathematics of Data Science 2021-01-01

In this paper, we introduce the Hessian-Schatten total variation (HTV), a novel seminorm that quantifies the "rugosity" of multivariate functions. Our motivation for defining HTV is to assess the complexity of supervised-learning schemes. We start by specifying adequate matrix-valued Banach spaces that are equipped with suitable classes of mixed norms. We then show that HTV is invariant to rotations, scalings, and translations. Additionally, its minimum value is achieved for linear mappings, which supports the common intuition that linear regression...
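For twice-differentiable $f$ (the paper extends this in a distributional sense), a seminorm of this kind takes the schematic form

$$\mathrm{HTV}_p(f) = \int_{\mathbb{R}^d} \big\|\mathbf{H}f(\mathbf{x})\big\|_{S_p}\,\mathrm{d}\mathbf{x},$$

where $\mathbf{H}f$ is the Hessian of $f$ and $\|\cdot\|_{S_p}$ a Schatten norm of its singular values; since the Hessian of an affine mapping vanishes, such mappings attain the minimum value zero.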

10.1137/22m147517x article EN SIAM Journal on Mathematics of Data Science 2023-06-01

We formulate as an inverse problem the construction of sparse parametric continuous curve models that fit a sequence of contour points. Our prior is incorporated as a regularization term that encourages rotation invariance and sparsity. We prove that an optimal solution is a closed curve with spline components. We then show how to efficiently solve the task using B-splines as basis functions. We extend our formulation to curves made of two distinct components with complementary smoothness properties by means of hybrid splines. We illustrate the performance of our model on...

10.1109/tip.2022.3187286 article EN IEEE Transactions on Image Processing 2022-01-01

We develop a novel 2D functional learning framework that employs a sparsity-promoting regularization based on second-order derivatives. Motivated by the nature of the regularizer, we restrict the search space to the span of piecewise-linear box splines shifted on a 2D lattice. Our formulation of the infinite-dimensional problem over this search space allows us to recast it exactly as a finite-dimensional one that can be solved using standard methods in convex optimization. Since our search space is composed of continuous and piecewise-linear functions, our work presents itself as an...
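The exact finite-dimensional recast has, in my schematic notation, a generic lasso-type form:

$$\min_{\mathbf{c}}\ \sum_{n}\Big(y_n - \sum_{\mathbf{k}} c_{\mathbf{k}}\,\beta(\mathbf{x}_n - \mathbf{k})\Big)^2 + \lambda\,\|\mathbf{L}\mathbf{c}\|_1,$$

where $\beta$ is the piecewise-linear box spline on the lattice, $\mathbf{c}$ collects its shift coefficients, and $\mathbf{L}$ is the discrete operator induced by the second-order regularizer.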

10.1109/ojsp.2021.3136488 article EN cc-by IEEE Open Journal of Signal Processing 2021-12-17

In this paper, we fully characterize the duality mapping over the space of matrices that are equipped with Schatten norms. Our approach is based on the analysis of the saturation of the Hölder inequality for Schatten norms. We prove in our main result that, for p∈(1,∞), the duality mapping over the real-valued Schatten-p norm is a continuous and single-valued function, and we provide an explicit form for its computation. For the special case p = 1, the duality mapping is set-valued; by adding a rank constraint, we show that it can be reduced to a Borel-measurable function for which we also provide a closed-form expression.
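For $p\in(1,\infty)$ and $\mathbf{X} = \mathbf{U}\,\mathrm{diag}(\sigma_1,\dots,\sigma_r)\,\mathbf{V}^{\mathsf{T}}$, a closed form of this flavor (up to the normalization convention of the duality mapping; see the paper for the precise statement) is

$$\mathrm{J}_p(\mathbf{X}) = \|\mathbf{X}\|_{S_p}^{\,2-p}\ \mathbf{U}\,\mathrm{diag}\big(\sigma_1^{p-1},\dots,\sigma_r^{p-1}\big)\,\mathbf{V}^{\mathsf{T}},$$

which is continuous and single-valued; at $p=1$ the exponent $(p-1)$ vanishes and non-unique singular vectors make the mapping set-valued.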

10.1080/01630563.2021.1922438 article EN cc-by-nc-nd Numerical Functional Analysis and Optimization 2021-04-26

We provide an algorithm to generate trajectories of sparse stochastic processes that are solutions of linear ordinary differential equations driven by Lévy white noises. A recent paper showed that these processes are limits in law of generalized compound-Poisson processes. Based on this result, we derive an off-the-grid algorithm that generates arbitrarily close approximations of the target process. Our method relies on a B-spline representation of the generalized compound-Poisson processes. We illustrate numerically the validity of our approach.
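A toy version of the approximating building block, assuming NumPy (the function name and the Gaussian jump law are illustrative assumptions of mine; the paper's algorithm adds the B-spline/ODE machinery on top of such processes):

import numpy as np

rng = np.random.default_rng(0)

def compound_poisson(t_max, rate, jump_sampler, n_grid=1000):
    """Sample one trajectory of a compound Poisson process on [0, t_max].

    Jump locations follow a Poisson(rate * t_max) count with uniform times;
    amplitudes come from jump_sampler; the trajectory is piecewise-constant.
    """
    n_jumps = rng.poisson(rate * t_max)
    times = np.sort(rng.uniform(0.0, t_max, n_jumps))
    amps = jump_sampler(n_jumps)
    t = np.linspace(0.0, t_max, n_grid)
    # value at time t is the sum of all jumps that occurred up to t
    traj = (amps[None, :] * (times[None, :] <= t[:, None])).sum(axis=1)
    return t, traj

t, x = compound_poisson(10.0, rate=2.0, jump_sampler=rng.standard_normal)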

10.1109/tsp.2020.3011632 article EN IEEE Transactions on Signal Processing 2020-01-01

The motivation for this work is to improve the performance of deep neural networks through the optimization of the individual activation functions. Since the latter results in an infinite-dimensional optimization problem, we resolve the ambiguity by searching for the sparsest and most regular solution in the sense of Lipschitz. To that end, we first introduce a bound that relates the properties of the pointwise nonlinearities to the global Lipschitz constant of the network. By using the proposed bound as a regularizer, we then derive a representer theorem that shows that the optimum configuration...

10.1109/icassp.2019.8682547 article EN ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2019-04-17

We investigate the problem of index coding, where a sender transmits distinct packets over a shared link to multiple users with side information. The aim is to find an encoding scheme (linear combinations) that minimizes the number of transmitted packets while providing each user with a sufficient amount of data for the recovery of the desired parts. It has been shown that finding the optimal linear code is equivalent to a matrix completion problem, where the observed elements indicate the side information available to the users. This modeling results in an incomplete...
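Schematically (in my notation), the resulting problem is a rank-minimizing matrix completion:

$$\min_{\mathbf{X}}\ \mathrm{rank}(\mathbf{X}) \quad \text{s.t.} \quad X_{ij} = A_{ij}\ \text{ for } (i,j)\in\Omega,$$

where $\Omega$ indexes the entries fixed by the users' side information and the achievable rank corresponds to the number of coded transmissions.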

10.23919/eusipco.2017.8081682 article EN 2017 25th European Signal Processing Conference (EUSIPCO) 2017-08-01

The goal of derivative sampling is to reconstruct a signal from the samples of the function and of its first-order derivative. In this paper, we consider this problem over a shift-invariant reconstruction subspace that is generated by two compact-support functions. We assume that the reconstruction subspace reproduces polynomials up to a certain degree. We then derive a lower bound on the sum of the supports of the generators. Finally, we illustrate the tightness of our bound with some examples.

10.1109/sampta45681.2019.9030990 article EN 2019-07-01

Beside the minimization of the prediction error, two of the most desirable properties of a regression scheme are stability and interpretability. Driven by these principles, we propose continuous-domain formulations for one-dimensional regression problems. In our first approach, we use the Lipschitz constant as a regularizer, which results in an implicit tuning of the overall...

10.1109/ojsp.2022.3157082 article EN cc-by IEEE Open Journal of Signal Processing 2022-01-01

We propose a novel method for the clustering of point-cloud data that originate from single-molecule localization microscopy (SMLM). Our scheme has the ability to infer a hierarchical structure from the data. It takes a particular relevance when quantitatively analyzing the biological particles of interest at different scales. It assumes a prior neither on the shape of the particles nor on the background noise. Our multiscale clustering pipeline is built upon graph theory. At each scale, we first construct a weighted graph that represents the SMLM data. Next, we find the clusters...
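A single-scale toy analogue, assuming SciPy (the function and parameter names are mine; the paper's pipeline uses weighted graphs and repeats the construction across scales):

import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def cluster_pointcloud(points, radius):
    """Connect localizations closer than `radius` and report the
    connected components of the resulting graph as clusters."""
    tree = cKDTree(points)
    pairs = tree.query_pairs(radius, output_type='ndarray')
    n = len(points)
    w = np.ones(len(pairs))
    g = coo_matrix((w, (pairs[:, 0], pairs[:, 1])), shape=(n, n))
    _, labels = connected_components(g, directed=False)
    return labels

Calling cluster_pointcloud(xy, radius) on an (N, 2) array of localizations returns one integer cluster label per point.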

10.1101/2020.12.22.423931 preprint EN cc-by bioRxiv (Cold Spring Harbor Laboratory) 2020-12-22

In this paper, we precisely quantify the wavelet compressibility of compound Poisson processes. To that end, we expand a given random process over the Haar basis and analyse its asymptotic approximation properties. By only considering the nonzero coefficients up to a given scale, what we call the greedy approximation, we exploit the extreme sparsity of the expansion that derives from the piecewise-constant nature of the processes. More precisely, we provide lower and upper bounds for the mean squared error of the greedy approximation of compound Poisson processes. We are then able to deduce that the greedy approximation error has a sub-exponential...
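A quick numerical illustration of the sparsity mechanism, assuming NumPy (the test signal and tolerance are my choices): for a piecewise-constant signal, a Haar detail coefficient is nonzero only when a jump falls inside its support.

import numpy as np

def haar_nonzero_fraction(signal, tol=1e-12):
    """Fraction of nonzero Haar detail coefficients of a discrete signal
    whose length is a power of two."""
    x = np.asarray(signal, dtype=float)
    nonzero = total = 0
    while len(x) > 1:
        avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        det = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        nonzero += np.sum(np.abs(det) > tol)
        total += len(det)
        x = avg
    return nonzero / total

# piecewise-constant test signal with 3 jumps, length 2**10
sig = np.repeat([0.0, 1.5, -0.5, 2.0], 256)
print(haar_nonzero_fraction(sig))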

10.1109/tit.2021.3139287 article EN IEEE Transactions on Information Theory 2021-12-28

In this paper, we provide a Banach-space formulation of supervised learning with generalized total-variation (gTV) regularization. We identify the class of kernel functions that are admissible in this framework. Then, we propose supervised learning over a continuous-domain hybrid search space with gTV regularization and show that the solution admits a multi-kernel expansion with adaptive positions. In this representation, the number of active kernels is upper-bounded by the number of data points, while the gTV regularization imposes an $\ell_1$ penalty on the kernel coefficients. Finally, we illustrate...

10.48550/arxiv.1811.00836 preprint EN other-oa arXiv (Cornell University) 2018-01-01