Yangyang Xu

ORCID: 0000-0002-4163-3723
Research Areas
  • Sparse and Compressive Sensing Techniques
  • Stochastic Gradient Optimization Techniques
  • Advanced Optimization Algorithms Research
  • Tensor decomposition and applications
  • Matrix Theory and Algorithms
  • Face and Expression Recognition
  • Optimization and Variational Analysis
  • Blind Source Separation Techniques
  • Risk and Portfolio Optimization
  • Image and Signal Denoising Methods
  • Seismic Imaging and Inversion Techniques
  • Privacy-Preserving Technologies in Data
  • Complexity and Algorithms in Graphs
  • Distributed Control Multi-Agent Systems
  • Elasticity and Material Modeling
  • Machine Learning and ELM
  • Domain Adaptation and Few-Shot Learning
  • Photoacoustic and Ultrasonic Imaging
  • Wireless Communication Networks Research
  • Advanced Adaptive Filtering Techniques
  • Seismic Waves and Analysis
  • Statistical Methods and Inference
  • Video Coding and Compression Technologies
  • Microwave Imaging and Scattering Analysis
  • Mobile Ad Hoc Networks

Rensselaer Polytechnic Institute
2015-2024

Lanzhou Jiaotong University
2019-2024

Beijing Advanced Sciences and Innovation Center
2023-2024

Beihang University
2024

Jilin Province Science and Technology Department
2020-2024

Jilin University
2020-2024

Macau University of Science and Technology
2024

Huazhong University of Science and Technology
2019-2020

Jilin Medical University
2020

Lanzhou University of Finance and Economics
2020

This paper considers regularized block multiconvex optimization, where the feasible set and objective function are generally nonconvex but convex in each block of variables. It also accepts nonconvex blocks and requires these blocks to be updated by proximal minimization. We review some interesting applications and propose a generalized block coordinate descent method. Under certain conditions, we show that any limit point satisfies the Nash equilibrium conditions. Furthermore, we establish global convergence and estimate the asymptotic convergence rate...

10.1137/120887795 article EN SIAM Journal on Imaging Sciences 2013-01-01
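
The block-multiconvex structure described above can be illustrated on nonnegative matrix factorization, a classic problem that is convex in each factor but not jointly. The sketch below is a minimal prox-gradient block update in that spirit, not the paper's exact scheme; the step sizes and the projection-as-prox choice are illustrative assumptions.

```python
import numpy as np

# Nonnegative matrix factorization: min_{X,Y>=0} 0.5*||M - X @ Y||^2.
# The objective is nonconvex jointly but convex in X (with Y fixed) and
# in Y (with X fixed), so each block can be updated by a proximal
# (here: projected) gradient step, block by block.
rng = np.random.default_rng(0)
M = rng.random((30, 20))
X, Y = rng.random((30, 5)), rng.random((5, 20))

for _ in range(200):
    # Block 1: prox-gradient step in X with Y held fixed.
    G = (X @ Y - M) @ Y.T
    L = np.linalg.norm(Y @ Y.T, 2)      # Lipschitz constant of the block gradient
    X = np.maximum(X - G / L, 0.0)      # proximal step = projection onto X >= 0
    # Block 2: same update for Y, using the freshly updated X (Gauss-Seidel order).
    G = X.T @ (X @ Y - M)
    L = np.linalg.norm(X.T @ X, 2)
    Y = np.maximum(Y - G / L, 0.0)

print("residual:", np.linalg.norm(M - X @ Y))
```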

In this paper, we first study $\ell_q$ minimization and its associated iterative reweighted algorithm for recovering sparse vectors. Unlike most existing work, we focus on unconstrained $\ell_q$ minimization, for which we show a few advantages on noisy measurements and/or approximately sparse vectors. Inspired by the results in [Daubechies et al., Comm. Pure Appl. Math., 63 (2010), pp. 1--38] on constrained $\ell_q$ minimization, we start with a preliminary yet novel analysis for the unconstrained case, which includes convergence, an error bound, and local convergence behavior. Then, the algorithm and analysis are extended to...

10.1137/110840364 article EN SIAM Journal on Numerical Analysis 2013-01-01
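
A minimal sketch of the iteratively reweighted least-squares loop behind unconstrained smoothed $\ell_q$ minimization, under assumed problem sizes and a made-up smoothing schedule (the paper's precise update and parameter rules differ):

```python
import numpy as np

# Unconstrained smoothed l_q minimization (0 < q < 1):
#   min_x  0.5*||A x - b||^2 + lam * sum_i (x_i^2 + eps^2)^(q/2),
# solved by iteratively reweighted least squares (IRLS): each iteration
# solves a weighted ridge system with weights from the previous iterate.
rng = np.random.default_rng(1)
m, n, q, lam = 40, 100, 0.5, 0.05
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(m)

x, eps = np.zeros(n), 1.0
for _ in range(50):
    w = q * (x**2 + eps**2) ** (q / 2 - 1)        # reweighting from current iterate
    x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)
    eps = max(eps * 0.7, 1e-8)                     # illustrative smoothing schedule

print("largest entries at:", np.argsort(-np.abs(x))[:5])
```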

Higher-order low-rank tensors naturally arise in many applications including hyperspectral data recovery, video inpainting, seismic reconstruction, and so on. We propose a new model to recover a low-rank tensor by simultaneously performing low-rank matrix factorizations to the all-mode matricizations of the underlying tensor. An alternating minimization algorithm is applied to solve the model, along with two adaptive rank-adjusting strategies for when the exact rank is not known. Phase transition plots reveal that our algorithm can recover a variety...

10.3934/ipi.2015.9.601 article EN cc-by Inverse Problems and Imaging 2015-01-01
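
To make the model concrete, the sketch below recovers a partially observed low-rank tensor by repeatedly low-rank-factorizing every mode-$n$ matricization and re-imputing the missing entries. A truncated SVD stands in for the paper's alternating factor updates and the rank is assumed known, so this is a simplified stand-in rather than the published algorithm:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization: move axis `mode` first, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

rng = np.random.default_rng(2)
shape, r = (10, 12, 8), 3
G = [rng.standard_normal((s, r)) for s in shape]
T_true = np.einsum('ir,jr,kr->ijk', *G)         # a rank-3 ground truth
mask = rng.random(shape) < 0.6                   # ~60% of entries observed
Z = np.where(mask, T_true, 0.0)                  # current estimate (imputed)

for _ in range(100):
    recon = np.zeros(shape)
    for n in range(3):
        # Best rank-r approximation of the mode-n unfolding via truncated
        # SVD (a stand-in for the low-rank factorization of each mode).
        U, s, Vt = np.linalg.svd(unfold(Z, n), full_matrices=False)
        recon += fold(U[:, :r] * s[:r] @ Vt[:r], n, shape) / 3.0
    Z = np.where(mask, T_true, recon)            # keep observed entries fixed

print("relative error:", np.linalg.norm(Z - T_true) / np.linalg.norm(T_true))
```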

Finding a fixed point to a nonexpansive operator, i.e., $x^*=Tx^*$, abstracts many problems in numerical linear algebra, optimization, and other areas of scientific computing. To solve fixed-point problems, we propose ARock, an algorithmic framework in which multiple agents (machines, processors, or cores) update $x$ in an asynchronous parallel fashion. Asynchrony is crucial to parallel computing since it reduces synchronization wait, relaxes the communication bottleneck, and thus speeds up computing significantly. At each step...

10.1137/15m1024950 article EN SIAM Journal on Scientific Computing 2016-01-01
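
A toy illustration of the asynchronous idea: several threads share one iterate and apply lock-free coordinate updates toward a fixed point of a nonexpansive operator (here built from a linear system). The operator, step size, and thread count are illustrative assumptions; this ignores the details of ARock's actual framework.

```python
import numpy as np
import threading

# Threads repeatedly pick a random coordinate of the shared iterate x and
# apply x_i <- x_i - eta * (x - T x)_i with no locking, so each thread may
# read a slightly stale x while others write (the asynchronous regime).
rng = np.random.default_rng(3)
n = 50
A = rng.standard_normal((n, n)); A = A @ A.T
A /= np.linalg.norm(A, 2)              # eigenvalues in (0, 1]
b = rng.standard_normal(n)
x = np.zeros(n)                        # shared iterate, updated lock-free

# T(v) = v - (A v - b) is nonexpansive here; its fixed point solves A x = b,
# and (x - T x)_i = (A x - b)_i.
def worker(seed, steps=4000, eta=0.5):
    local = np.random.default_rng(seed)
    for _ in range(steps):
        i = local.integers(n)
        x[i] -= eta * (A[i] @ x - b[i])  # coordinate update on possibly stale x

threads = [threading.Thread(target=worker, args=(k,)) for k in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print("residual:", np.linalg.norm(A @ x - b))
```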

The stochastic gradient (SG) method can quickly solve a problem with a large number of components in the objective, or a stochastic optimization problem, to moderate accuracy. The block coordinate descent/update (BCD) method, on the other hand, can quickly solve problems with multiple (blocks of) variables. This paper introduces a method that combines the great features of SG and BCD for problems with many components in the objective and with multiple (blocks of) variables, and it proposes a block stochastic gradient (BSG) method for both convex and nonconvex programs. BSG generalizes SG by updating all the blocks of variables in a Gauss--Seidel type fashion (updating the current block depends on the previously updated blocks)...

10.1137/140983938 article EN SIAM Journal on Optimization 2015-01-01
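
A sketch of the block stochastic gradient pattern on a two-block least-squares problem: each iteration samples a minibatch and updates the blocks in Gauss--Seidel order, the second block using the freshly updated first one. The step-size rule and batch size are illustrative, not the paper's:

```python
import numpy as np

# Objective is an average over N component functions; the variable splits
# into two blocks x = (u, v). Each iteration uses a minibatch gradient and
# sweeps the blocks sequentially.
rng = np.random.default_rng(4)
N, d = 5000, 20
A = rng.standard_normal((N, 2 * d))
x_true = rng.standard_normal(2 * d)
b = A @ x_true + 0.01 * rng.standard_normal(N)
u, v = np.zeros(d), np.zeros(d)

for k in range(1, 2001):
    idx = rng.integers(N, size=32)        # sampled minibatch
    Au, Av, bi = A[idx, :d], A[idx, d:], b[idx]
    eta = 0.5 / np.sqrt(k)                # illustrative diminishing step size
    u -= eta * Au.T @ (Au @ u + Av @ v - bi) / len(idx)  # block 1
    v -= eta * Av.T @ (Au @ u + Av @ v - bi) / len(idx)  # block 2 sees new u

print("error:", np.linalg.norm(np.concatenate([u, v]) - x_true))
```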

10.1007/s12532-014-0074-y article EN Mathematical Programming Computation 2014-05-19

Motivated by big data applications, first-order methods have been extremely popular in recent years. However, naive gradient methods generally converge slowly. Hence, much effort has been made to accelerate various first-order methods. This paper proposes two accelerated methods towards solving structured linearly constrained convex programming, for which we assume a composite convex objective that is the sum of a differentiable function and a possibly nondifferentiable one. The first method is the linearized augmented Lagrangian method (LALM). At...

10.1137/16m1082305 article EN SIAM Journal on Optimization 2017-01-01
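
A minimal, unaccelerated sketch of a linearized augmented Lagrangian iteration on a toy composite problem (smooth quadratic plus $\ell_1$): linearizing the smooth part of the augmented Lagrangian reduces the primal update to a single soft-thresholding. The penalty parameter and step size are illustrative assumptions, and the paper's acceleration is omitted.

```python
import numpy as np

# min_x f(x) + g(x)  s.t.  A x = b, with f(x) = 0.5*||x - c||^2 smooth
# and g = ||.||_1 prox-friendly. Linearize f plus the quadratic penalty
# at x^k so the x-update is one prox (soft-threshold) step.
rng = np.random.default_rng(5)
m, n, beta = 10, 40, 1.0
A = rng.standard_normal((m, n)); b = rng.standard_normal(m)
c = rng.standard_normal(n)
x, lam = np.zeros(n), np.zeros(m)
eta = 1.0 / (1.0 + beta * np.linalg.norm(A, 2) ** 2)  # from the smooth part's Lipschitz constant

soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
for _ in range(500):
    grad = (x - c) + A.T @ (lam + beta * (A @ x - b))  # gradient of linearized part
    x = soft(x - eta * grad, eta)                      # prox of eta*||.||_1
    lam += beta * (A @ x - b)                          # multiplier update

print("feasibility:", np.linalg.norm(A @ x - b))
```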

This paper focuses on coordinate update methods, which are useful for solving problems involving large or high-dimensional datasets. They decompose a problem into simple subproblems, where each update changes one, or a small block of, variables while fixing the others. These methods can deal with linear and nonlinear mappings, smooth and nonsmooth functions, as well as convex and nonconvex problems. In addition, they are easy to parallelize. The great performance of these methods depends on solving simple subproblems. To derive simple subproblems, several new...

10.4310/amsa.2016.v1.n1.a2 article EN Annals of Mathematical Sciences and Applications 2016-01-01
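
What "simple subproblems" means in practice is that one coordinate update should cost a small fraction of a full update, and the standard device for this is caching. A sketch for least squares, where keeping the residual up to date makes each coordinate step $O(m)$:

```python
import numpy as np

# "Coordinate friendly" structure for 0.5*||A x - b||^2: cache the
# residual r = A x - b so a coordinate step touches only one column
# of A instead of doing a full matrix-vector product.
rng = np.random.default_rng(6)
m, n = 200, 100
A = rng.standard_normal((m, n)); b = rng.standard_normal(m)
col_sq = (A ** 2).sum(axis=0)        # per-coordinate Lipschitz constants
x, r = np.zeros(n), -b.copy()        # cached residual r = A x - b

for _ in range(50 * n):
    i = rng.integers(n)
    g = A[:, i] @ r                  # partial derivative, O(m) work
    delta = -g / col_sq[i]           # exact minimization along coordinate i
    x[i] += delta
    r += delta * A[:, i]             # keep the cache consistent, O(m) work

print("objective:", 0.5 * np.dot(r, r))
```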

This paper introduces algorithms for the decentralized low-rank matrix completion problem. Assume a low-rank matrix $W = [W_1, W_2, \ldots, W_L]$. In a network, each agent $\ell$ observes some entries of $W_\ell$. In order to recover the unobserved entries of $W$ via decentralized computation, we factorize the unknown...

10.1109/icassp.2012.6288528 article EN 2012-03-01
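
A heavily simplified sketch of the setup: each agent factors its block as $X Y_\ell$ with a shared left factor, and a plain averaging step stands in for communication over the network. Full observation of each block and exact consensus are simplifying assumptions here; the paper handles partial observations and true decentralized mixing.

```python
import numpy as np

# W = [W_1, ..., W_L] shares one left factor: W_l ~ X @ Y_l. Each agent
# alternates least-squares updates of its local Y_l and its local copy of
# X; a consensus (averaging) step mimics communication over the network.
rng = np.random.default_rng(7)
L, m, nl, r = 4, 30, 15, 3
X_true = rng.standard_normal((m, r))
W = [X_true @ rng.standard_normal((r, nl)) for _ in range(L)]  # true blocks
Xs = [rng.standard_normal((m, r)) for _ in range(L)]           # per-agent X copies
Ys = [np.zeros((r, nl)) for _ in range(L)]

for _ in range(50):
    for l in range(L):
        Ys[l] = np.linalg.lstsq(Xs[l], W[l], rcond=None)[0]         # local Y_l
        Xs[l] = np.linalg.lstsq(Ys[l].T, W[l].T, rcond=None)[0].T   # local X
    X_avg = sum(Xs) / L                          # consensus averaging step
    Xs = [X_avg.copy() for _ in range(L)]

err = sum(np.linalg.norm(Xs[l] @ Ys[l] - W[l]) for l in range(L))
print("total residual:", err)
```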

This monograph presents a class of algorithms called coordinate descent algorithms for mathematicians, statisticians, and engineers outside the field of optimization. This particular class of algorithms has recently gained popularity due to their effectiveness in solving large-scale optimization problems in machine learning, compressed sensing, image processing, and computational statistics. Coordinate descent algorithms solve optimization problems by successively minimizing along each coordinate or coordinate hyperplane, which is ideal for parallelized and distributed computing. Avoiding detailed...

10.48550/arxiv.1610.00040 preprint EN other-oa arXiv (Cornell University) 2016-01-01
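
The basic pattern the monograph covers, in a few lines: cyclic exact minimization along each coordinate of a positive definite quadratic, which reduces to the Gauss--Seidel sweep.

```python
import numpy as np

# Minimize f(x) = 0.5*x^T Q x - b^T x (Q positive definite) by exactly
# minimizing along one coordinate at a time, cycling through them.
rng = np.random.default_rng(8)
n = 30
Q = rng.standard_normal((n, n)); Q = Q @ Q.T + n * np.eye(n)
b = rng.standard_normal(n)
x = np.zeros(n)

for sweep in range(100):
    for i in range(n):
        # Closed-form minimizer over x_i with the other coordinates fixed.
        x[i] = (b[i] - Q[i] @ x + Q[i, i] * x[i]) / Q[i, i]

print("gradient norm:", np.linalg.norm(Q @ x - b))
```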

Many real-world problems, such as those with fairness constraints, involve complex expectation constraints and large datasets, necessitating the design of efficient stochastic methods to solve them. Most existing research focuses on cases with no constraint or with easy-to-project deterministic constraints. In this paper, we consider nonconvex nonsmooth stochastic optimization problems with expectation constraints, for which we build a novel exact penalty model. We first show the relationship between the penalty model and the original problem. Then, to solve the penalty problem,...

10.48550/arxiv.2501.19214 preprint EN arXiv (Cornell University) 2025-01-31
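
Since the abstract does not spell out the penalty model, the sketch below shows only the generic exact-penalty mechanism it builds on: an expectation constraint moved into the objective as $\beta \max(0, \mathbb{E}[g])$ and attacked with a sampled subgradient method. The constraint, penalty weight, and step sizes are all illustrative assumptions.

```python
import numpy as np

# Penalize E[g(x; xi)] <= 0 via beta * max(0, E[g(x; xi)]) and run a
# stochastic subgradient method, estimating the constraint from samples.
rng = np.random.default_rng(9)
d, beta = 10, 10.0
a = rng.standard_normal(d)
# Constraint E[g] = 1 - a.x <= 0; objective f(x) = 0.5*||x||^2, so the
# solution is the projection of 0 onto {x : a.x >= 1}, i.e. a / ||a||^2.

x = np.zeros(d)
for k in range(1, 5001):
    noise = 0.1 * rng.standard_normal(64)
    g_mean = (1.0 - a @ x + noise).mean()          # sampled constraint estimate
    sub = x + (-beta * a if g_mean > 0 else 0.0)   # subgradient of the penalty model
    x -= (0.1 / np.sqrt(k)) * sub                  # diminishing step size

print("constraint:", 1.0 - a @ x,
      "distance to truth:", np.linalg.norm(x - a / (a @ a)))
```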

Many dictionary-based methods in image processing use a dictionary to represent all the patches of an image. We address the open issue of modeling an image by its overlapping patches: due to the overlapping, there are a large number of patches, and to recover these patches one must determine an excessive number of their coefficients. With very few exceptions, this issue has limited the applications of image-patch models to "local" tasks such as denoising, inpainting, cartoon-texture decomposition, super-resolution, and deblurring, where one can process a few patches at a time. Our focus is global...

10.3934/ipi.2016012 article EN cc-by Inverse Problems and Imaging 2016-05-01
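
The bookkeeping that makes whole-image patch modeling expensive is visible in a few lines: the overlapping patches carry far more coefficients than the image has pixels, and synthesizing an image from per-patch representations requires averaging the overlaps. A sketch of just that extract/average pair (not the paper's recovery model):

```python
import numpy as np

def extract_patches(img, p):
    """All overlapping p x p patches, one flattened patch per row."""
    H, W = img.shape
    return np.stack([img[i:i + p, j:j + p].ravel()
                     for i in range(H - p + 1) for j in range(W - p + 1)])

def average_patches(patches, shape, p):
    """Synthesize an image by averaging the overlapping patch covers."""
    H, W = shape
    acc, cnt = np.zeros(shape), np.zeros(shape)
    k = 0
    for i in range(H - p + 1):
        for j in range(W - p + 1):
            acc[i:i + p, j:j + p] += patches[k].reshape(p, p)
            cnt[i:i + p, j:j + p] += 1.0
            k += 1
    return acc / cnt

rng = np.random.default_rng(10)
img = rng.random((16, 16))
P = extract_patches(img, 4)
print("pixels:", img.size, "patch coefficients:", P.size)   # 256 vs 2704
print("round-trip error:",
      np.abs(average_patches(P, img.shape, 4) - img).max())
```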

High-dimensional data contain not only redundancy but also noises produced by the sensors. These noises are usually non-Gaussian distributed. The metrics based on Euclidean distance are not suitable for these situations in general. In order to select useful features and combat the adverse effects of noises simultaneously, a robust sparse subspace learning method for the unsupervised scenario is proposed in this paper based on the maximum correntropy criterion, which shows strong robustness against outliers. Furthermore, an iterative strategy...

10.1109/tcsvt.2017.2783364 article EN publisher-specific-oa IEEE Transactions on Circuits and Systems for Video Technology 2017-12-14
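
The robustness mechanism of the maximum correntropy criterion can be seen on a robust line fit: a Gaussian kernel caps each sample's influence, and half-quadratic optimization turns the problem into iteratively reweighted least squares. This illustrates the criterion only, not the paper's sparse subspace learning model; the kernel width and data are assumptions.

```python
import numpy as np

# Correntropy replaces the squared error by k(e) = exp(-e^2 / (2 s^2)),
# so gross outliers receive near-zero weight in the reweighted fit.
rng = np.random.default_rng(11)
n, s = 200, 0.5
t = rng.uniform(-1, 1, n)
y = 2.0 * t + 0.3 + 0.05 * rng.standard_normal(n)
y[:20] += 5.0                                    # gross outliers
A = np.column_stack([t, np.ones(n)])

w = np.ones(n)
for _ in range(30):
    sw = np.sqrt(w)
    coef = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)[0]
    e = y - A @ coef
    w = np.exp(-e**2 / (2 * s**2))               # correntropy-induced weights

print("slope, intercept:", coef)                 # near (2.0, 0.3) despite outliers
```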

Nonconvex optimization problems arise in many areas of computational science and engineering and are (approximately) solved by a variety of algorithms. Existing algorithms usually only have local convergence or subsequence convergence of their iterates. We propose an algorithm for a generic nonconvex formulation, establish the convergence of its whole iterate sequence to a critical point along with a rate of convergence, and numerically demonstrate its efficiency. Specifically, we consider the problem of minimizing a nonconvex objective function. Its variables can...

10.48550/arxiv.1410.1386 preprint EN other-oa arXiv (Cornell University) 2014-01-01

10.1007/s12532-018-0148-3 article EN Mathematical Programming Computation 2018-09-20

The stochastic gradient method (SGM) has been popularly applied to solve optimization problems with an objective that is stochastic or an average of many functions. Most existing works on SGMs assume that the underlying problem is unconstrained or has an easy-to-project constraint set. In this paper, we consider problems that have a stochastic objective and also many functional constraints. For such problems, it could be extremely expensive to project a point onto the feasible set, or even to compute a subgradient and/or function value of all the constraint functions. To find solutions of these problems, we propose a novel...

10.1137/18m1229869 article EN SIAM Journal on Optimization 2020-01-01
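
A sketch of the primal-dual stochastic approach the abstract points toward: with many functional constraints, each iteration samples a single constraint, takes a primal step on the sampled Lagrangian, and a projected dual ascent step on that constraint. The problem instance, step sizes, and unbiased-sampling correction below are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

# min_x f(x)  s.t.  g_j(x) <= 0 for j = 1..J, with J too large to touch
# every constraint per iteration. Sample one constraint at a time.
rng = np.random.default_rng(12)
d, J = 5, 200
C = rng.standard_normal((J, d))                   # constraints: C[j].x - 1 <= 0
c = rng.standard_normal(d)                        # objective f(x) = c.x + 0.5*||x||^2
x, z = np.zeros(d), np.zeros(J)                   # primal point, dual multipliers

for k in range(1, 20001):
    j = rng.integers(J)                           # one sampled constraint
    eta = 0.5 / np.sqrt(k)
    x -= eta * (c + x + J * z[j] * C[j])          # J * z_j grad g_j is unbiased for the sum
    z[j] = max(0.0, z[j] + eta * (C[j] @ x - 1))  # projected dual ascent

print("mean violation:", np.maximum(C @ x - 1, 0).mean(),
      "f(x):", c @ x + 0.5 * x @ x)
```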