- Sparse and Compressive Sensing Techniques
- Stochastic Gradient Optimization Techniques
- Advanced Optimization Algorithms Research
- Tensor Decomposition and Applications
- Matrix Theory and Algorithms
- Face and Expression Recognition
- Optimization and Variational Analysis
- Blind Source Separation Techniques
- Risk and Portfolio Optimization
- Image and Signal Denoising Methods
- Seismic Imaging and Inversion Techniques
- Privacy-Preserving Technologies in Data
- Complexity and Algorithms in Graphs
- Distributed Control Multi-Agent Systems
- Elasticity and Material Modeling
- Machine Learning and ELM
- Domain Adaptation and Few-Shot Learning
- Photoacoustic and Ultrasonic Imaging
- Wireless Communication Networks Research
- Advanced Adaptive Filtering Techniques
- Seismic Waves and Analysis
- Statistical Methods and Inference
- Video Coding and Compression Technologies
- Microwave Imaging and Scattering Analysis
- Mobile Ad Hoc Networks
Rensselaer Polytechnic Institute
2015-2024
Lanzhou Jiaotong University
2019-2024
Beijing Advanced Sciences and Innovation Center
2023-2024
Beihang University
2024
Jilin Province Science and Technology Department
2020-2024
Jilin University
2020-2024
Macau University of Science and Technology
2024
Huazhong University of Science and Technology
2019-2020
Jilin Medical University
2020
Lanzhou University of Finance and Economics
2020
This paper considers regularized block multiconvex optimization, where the feasible set and objective function are generally nonconvex but convex in each block of variables. It also accepts nonconvex blocks and requires these blocks to be updated by proximal minimization. We review some interesting applications and propose a generalized block coordinate descent method. Under certain conditions, we show that any limit point satisfies the Nash equilibrium conditions. Furthermore, we establish global convergence and estimate the asymptotic rate...
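As a minimal sketch of proximal block updates (on a toy biconvex function of my choosing, not a problem from the paper), each block is minimized exactly together with a proximal term:

```python
import numpy as np

def prox_bcd(x0, y0, alpha=1.0, iters=200):
    """Proximal block-update sketch on the toy biconvex function
    f(x, y) = 0.5*(x*y - 1)**2: convex in x for fixed y and in y for
    fixed x, but nonconvex jointly (an assumed example problem)."""
    x, y = x0, y0
    for _ in range(iters):
        # exact minimizer of f(., y) + 1/(2*alpha) * (. - x)^2
        x = (y + x / alpha) / (y * y + 1.0 / alpha)
        # same proximal update for the y-block
        y = (x + y / alpha) / (x * x + 1.0 / alpha)
    return x, y

x, y = prox_bcd(2.0, 2.0)   # a limit point satisfies x*y = 1
```

Here each block update shrinks the residual x*y - 1 by a factor depending on the other block, so the iterates settle on a blockwise-optimal (Nash-type) point.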
In this paper, we first study $\ell_q$ minimization and its associated iterative reweighted algorithm for recovering sparse vectors. Unlike most existing work, we focus on unconstrained $\ell_q$ minimization, for which we show a few advantages on noisy measurements and/or approximately sparse vectors. Inspired by the results in [Daubechies et al., Comm. Pure Appl. Math., 63 (2010), pp. 1--38] on constrained $\ell_q$ minimization, we start with a preliminary yet novel analysis, which includes convergence, error bound, and local convergence behavior. Then, the algorithm and analysis are extended to...
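A hedged sketch of the iteratively reweighted least-squares idea for the unconstrained smoothed $\ell_q$ problem (parameter choices and the epsilon-decay heuristic are my assumptions, not the paper's exact scheme):

```python
import numpy as np

def irls_lq(A, b, q=0.5, lam=1e-3, eps0=1.0, iters=50):
    """IRLS sketch for  min_x  lam * sum_i (x_i^2 + eps^2)^(q/2)
                               + 0.5 * ||A x - b||^2.
    Each pass solves a reweighted ridge system; eps is shrunk
    gradually (a common continuation heuristic)."""
    m, n = A.shape
    x = np.zeros(n)
    eps = eps0
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(iters):
        # small entries get large weights and are pushed toward zero
        w = lam * q * (x**2 + eps**2) ** (q / 2 - 1)
        x = np.linalg.solve(AtA + np.diag(w), Atb)
        eps = max(eps * 0.7, 1e-8)
    return x
```

On a typical compressed-sensing toy instance (e.g. a 40x100 Gaussian matrix and a 5-sparse vector), this recovers the sparse signal from noiseless measurements.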
Higher-order low-rank tensors naturally arise in many applications including hyperspectral data recovery, video inpainting, seismic reconstruction, and so on. We propose a new model to recover a low-rank tensor by simultaneously performing low-rank matrix factorizations to the all-mode matricizations of the underlying tensor. An alternating minimization algorithm is applied to solve the model, along with two adaptive rank-adjusting strategies when the exact rank is not known. Phase transition plots reveal that our algorithm can recover a variety...
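The per-mode building block is a low-rank factorization fitted to the observed entries of a matricization. A minimal sketch for a single matrix (alternating least squares on observed entries; the fixed rank, sampling rate, and regularizer are my assumptions, and the paper's rank-adjusting strategies are omitted):

```python
import numpy as np

def als_complete(M, mask, r, iters=100, reg=1e-6):
    """Alternating least squares sketch for low-rank completion:
    min_{X,Y} || mask * (X @ Y - M) ||_F^2,  X: m x r,  Y: r x n."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    X = rng.standard_normal((m, r))
    Y = rng.standard_normal((r, n))
    for _ in range(iters):
        for i in range(m):                 # least-squares row updates of X
            idx = mask[i]
            Yi = Y[:, idx]
            X[i] = np.linalg.solve(Yi @ Yi.T + reg * np.eye(r),
                                   Yi @ M[i, idx])
        for j in range(n):                 # least-squares column updates of Y
            idx = mask[:, j]
            Xj = X[idx]
            Y[:, j] = np.linalg.solve(Xj.T @ Xj + reg * np.eye(r),
                                      Xj.T @ M[idx, j])
    return X, Y
```

Each inner update is a tiny r x r linear solve, which is what makes the factorization approach cheap compared with repeated SVDs of the full matricization.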
Finding a fixed point to a nonexpansive operator, i.e., $x^*=Tx^*$, abstracts many problems in numerical linear algebra, optimization, and other areas of scientific computing. To solve fixed-point problems, we propose ARock, an algorithmic framework in which multiple agents (machines, processors, or cores) update $x$ in an asynchronous parallel fashion. Asynchrony is crucial to parallel computing since it reduces synchronization wait, relaxes the communication bottleneck, and thus speeds up computing significantly. At each step...
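A serial sketch of the coordinate-wise update at ARock's core (the real method runs these updates asynchronously with possibly stale copies of $x$; the demo operator is an example I chose):

```python
import numpy as np

def arock_serial_sketch(T, x0, eta=0.5, steps=5000, seed=0):
    """Serial sketch of randomized coordinate fixed-point updates for
    x* = T(x*): pick a coordinate i and apply
        x_i <- x_i - eta * (x - T(x))_i.
    In practice only coordinate i of T(x) would be evaluated."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(steps):
        i = rng.integers(len(x))
        x[i] -= eta * (x - T(x))[i]
    return x

# demo: T is the gradient-descent map of a strongly convex quadratic,
# hence nonexpansive; its fixed point is the minimizer c (assumed example)
Q = np.diag([1.0, 2.0, 3.0])
c = np.array([1.0, -1.0, 2.0])
T = lambda z: z - (1.0 / 3.0) * (Q @ (z - c))
x = arock_serial_sketch(T, np.zeros(3))
```

Since each sampled coordinate contracts toward the fixed point independently here, the iterates converge to c; asynchrony mainly changes which (possibly stale) $x$ the operator sees.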
The stochastic gradient (SG) method can quickly solve a problem with a large number of components in the objective, or a stochastic optimization problem, to moderate accuracy. The block coordinate descent/update (BCD) method, on the other hand, can quickly solve problems with multiple (blocks of) variables. This paper introduces a method that combines the great features of SG and BCD for problems with many objective components and multiple blocks of variables, and proposes a block stochastic gradient (BSG) method for both convex and nonconvex programs. BSG generalizes SG by updating all the blocks of variables in the Gauss--Seidel type (updating the current block depends on the previously updated blocks)...
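A hedged sketch of the combination (toy least-squares objective, two blocks, and stepsize schedule are my assumptions): sample one component, then sweep the blocks Gauss-Seidel style, each block's stochastic gradient evaluated at the latest iterate.

```python
import numpy as np

def bsg(A, b, blocks, alpha0=0.05, epochs=100, seed=0):
    """Block stochastic gradient (BSG) sketch on
    (1/2m) * sum_i (a_i^T x - b_i)^2."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for k in range(epochs):
        alpha = alpha0 / (1.0 + 0.1 * k)       # diminishing stepsizes
        for _ in range(m):
            i = rng.integers(m)                # sampled component
            for blk in blocks:
                r = A[i] @ x - b[i]            # uses latest x (Gauss-Seidel)
                x[blk] -= alpha * r * A[i, blk]
    return x
```

On a consistent (interpolating) system the sampled gradients vanish at the solution, so the iterates converge to the true coefficients.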
Motivated by big data applications, first-order methods have been extremely popular in recent years. However, naive gradient methods generally converge slowly. Hence, much effort has been made to accelerate various first-order methods. This paper proposes two accelerated methods towards solving structured linearly constrained convex programming, for which we assume a composite objective that is the sum of a differentiable function and a possibly nondifferentiable one. The first method is the accelerated linearized augmented Lagrangian method (LALM). At...
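A minimal non-accelerated sketch of the linearized augmented Lagrangian idea (smooth objective only; the demo problem and stepsizes are my assumptions): one gradient step on the augmented Lagrangian in $x$, then a dual ascent step.

```python
import numpy as np

def lalm(grad_f, A, b, n, beta=1.0, eta=0.1, iters=2000):
    """Linearized AL sketch for  min f(x)  s.t.  Ax = b:
    x-step: gradient step on L(x, lam) = f(x) + lam^T(Ax - b)
                                       + (beta/2)*||Ax - b||^2;
    lam-step: dual ascent with stepsize beta."""
    x = np.zeros(n)
    lam = np.zeros(A.shape[0])
    for _ in range(iters):
        res = A @ x - b
        x = x - eta * (grad_f(x) + A.T @ (lam + beta * res))
        lam = lam + beta * (A @ x - b)
    return x, lam

# assumed demo: min 0.5*||x||^2  s.t.  x1 + x2 = 1  ->  x* = (0.5, 0.5)
x, lam = lalm(lambda z: z, np.array([[1.0, 1.0]]), np.array([1.0]), 2)
```

The acceleration in the paper adds extrapolation/weighting on top of this basic primal-dual loop.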
This paper focuses on coordinate update methods, which are useful for solving problems involving large or high-dimensional datasets. They decompose a problem into simple subproblems, where each update involves one, or a small block of, variables while fixing the others. These methods can deal with linear and nonlinear mappings, smooth and nonsmooth functions, as well as convex and nonconvex problems. In addition, they are easy to parallelize. The great performance of coordinate update methods depends on solving simple subproblems. To derive simple subproblems, several new...
This paper introduces algorithms for the decentralized low-rank matrix completion problem. Assume a low-rank matrix W = [W_1, W_2, ..., W_L]. In a network, each agent ℓ observes some entries of W_ℓ. In order to recover the unobserved entries of W via decentralized computation, we factorize the unknown...
This monograph presents a class of algorithms called coordinate descent algorithms for mathematicians, statisticians, and engineers outside the field of optimization. This particular class has recently gained popularity due to their effectiveness in solving large-scale optimization problems in machine learning, compressed sensing, image processing, and computational statistics. Coordinate descent algorithms solve optimization problems by successively minimizing along each coordinate or coordinate hyperplane, which is ideal for parallelized and distributed computing. Avoiding detailed...
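The textbook instance of the idea is cyclic coordinate descent for the lasso, where each one-dimensional subproblem has a closed-form soft-thresholding solution (a standard sketch, with problem sizes and tolerances of my choosing):

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator, the prox of t*|.|."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cd_lasso(A, b, lam, iters=500):
    """Cyclic coordinate descent for  min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Each pass exactly minimizes over one coordinate at a time."""
    n = A.shape[1]
    x = np.zeros(n)
    r = b - A @ x                       # maintained residual
    col_sq = (A**2).sum(axis=0)
    for _ in range(iters):
        for j in range(n):
            r += A[:, j] * x[j]         # remove coordinate j's contribution
            x[j] = soft(A[:, j] @ r, lam) / col_sq[j]
            r -= A[:, j] * x[j]
    return x
```

Maintaining the residual r makes each coordinate step O(m), which is what gives coordinate descent its per-iteration cheapness on large problems.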
Many real-world problems, such as those with fairness constraints, involve complex expectation constraints and large datasets, necessitating the design of efficient stochastic methods to solve them. Most existing research focuses on cases with no constraint or with easy-to-project deterministic constraints. In this paper, we consider nonconvex nonsmooth optimization problems with expectation constraints, for which we build a novel exact penalty model. We first show the relationship between the penalty model and the original problem. Then, for solving the penalty problem,...
Many dictionary-based methods in image processing use a dictionary to represent all the patches of an image. We address the open issue of modeling an image by its overlapping patches: due to the overlapping, there are a large number of patches, and to recover these patches one must determine an excessive number of their coefficients. With very few exceptions, this issue has limited the applications of image-patch methods to ``local'' tasks such as denoising, inpainting, cartoon-texture decomposition, super-resolution, and deblurring, where one can process a few patches at a time. Our focus is the global...
High-dimensional data contain not only redundancy but also noises produced by the sensors. These noises are usually non-Gaussian distributed. The metrics based on the Euclidean distance are not suitable for these situations in general. In order to select useful features and combat the adverse effects of noises simultaneously, a robust sparse subspace learning method in the unsupervised scenario is proposed in this paper based on the maximum correntropy criterion, which shows strong robustness against outliers. Furthermore, an iterative strategy...
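To see why correntropy resists outliers, here is a minimal half-quadratic sketch on a toy location-estimation task (the data, kernel width, and iteration count are my assumptions; the paper applies the criterion to subspace learning, not to a mean):

```python
import numpy as np

def correntropy_mean(X, sigma=1.0, iters=50):
    """Estimate a center c maximizing the correntropy objective
    sum_i exp(-||x_i - c||^2 / (2*sigma^2)) via half-quadratic
    reweighting: outliers receive exponentially small weights."""
    c = X.mean(axis=0)
    for _ in range(iters):
        w = np.exp(-((X - c)**2).sum(axis=1) / (2 * sigma**2))
        c = (w[:, None] * X).sum(axis=0) / w.sum()
    return c
```

With a cluster near the origin plus a few gross outliers, the plain mean is dragged away while the correntropy estimate stays near the cluster.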
Nonconvex optimization problems arise in many areas of computational science and engineering and are (approximately) solved by a variety of algorithms. Existing algorithms usually only have local convergence or subsequence convergence of their iterates. We propose an algorithm for a generic nonconvex formulation, establish the convergence of its whole iterate sequence to a critical point along with a rate of convergence, and numerically demonstrate its efficiency. Specially, we consider the problem of minimizing a block multiconvex objective function. Its variables can...
The stochastic gradient method (SGM) has been popularly applied to solve optimization problems with an objective that is stochastic or an average of many functions. Most existing works on SGMs assume that the underlying problem is unconstrained or has an easy-to-project constraint set. In this paper, we consider problems that have a stochastic objective and also functional constraints. For such problems, it could be extremely expensive to project a point onto the feasible set, or even to compute a subgradient and/or function value of all the constraint functions. To find solutions of these problems, we propose a novel...
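One way to avoid touching all constraints per iteration is to sample a single constraint and penalize its violation; the sketch below illustrates this on a toy problem (the penalty weight, stepsizes, averaging, and problem are my assumptions, not the paper's actual primal-dual scheme):

```python
import numpy as np

def stoch_penalty(p, A, b, rho=10.0, alpha0=0.5, steps=40000, seed=0):
    """Sampled-penalty subgradient sketch for
        min 0.5*||x - p||^2   s.t.  a_j^T x <= b_j  for many j.
    Each step samples ONE constraint (never projecting onto all of
    them) and steps on 0.5*||x - p||^2 + rho*max(a_j^T x - b_j, 0)."""
    rng = np.random.default_rng(seed)
    x = p.astype(float).copy()
    xbar = np.zeros_like(x)
    count = 0
    for k in range(steps):
        alpha = alpha0 / np.sqrt(k + 1.0)
        j = rng.integers(len(b))            # sampled constraint
        g = x - p
        if A[j] @ x > b[j]:
            g = g + rho * A[j]              # penalty subgradient
        x = x - alpha * g
        if k >= steps // 2:                 # average the tail iterates
            xbar += x
            count += 1
    return xbar / count
```

With box constraints x_i <= 1 and p = (2, 2), the averaged iterates settle near the projection (1, 1) even though no step ever projects onto the full feasible set.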