- Sparse and Compressive Sensing Techniques
- Advanced Optimization Algorithms Research
- Stochastic Gradient Optimization Techniques
- Optimization and Variational Analysis
- Statistical Methods and Inference
- Image and Signal Denoising Methods
- Blind Source Separation Techniques
- Numerical Methods in Inverse Problems
- Complexity and Algorithms in Graphs
- Advanced Multi-Objective Optimization Algorithms
- Photoacoustic and Ultrasonic Imaging
- Face and Expression Recognition
- Matrix Theory and Algorithms
- Probabilistic and Robust Engineering Design
- Risk and Portfolio Optimization
- Optimal Experimental Design Methods
- Advanced Statistical Methods and Models
- Control Systems and Identification
- Bayesian Modeling and Causal Inference
- Statistical Methods and Bayesian Inference
- Advanced Bandit Algorithms Research
- Machine Learning and ELM
- Indoor and Outdoor Localization Technologies
- Tensor Decomposition and Applications
- Fuzzy Systems and Optimization
University of Minnesota
2019-2024
Shenzhen University
2023
Xidian University
2023
National Supercomputing Center in Shenzhen
2023
Chinese Academy of Medical Sciences & Peking Union Medical College
2023
Simon Fraser University
2010-2019
Twin Cities Orthopedics
2019
Nanjing University of Posts and Telecommunications
2015
Carnegie Mellon University
2006-2007
University of British Columbia
2007
Summary. We introduce a general formulation for dimension reduction and coefficient estimation in the multivariate linear model. We argue that many of the existing methods that are commonly used in practice can be formulated within this framework and have various restrictions. We continue to propose a new method that is more flexible and more generally applicable. The proposed method can be formulated as a novel penalized least squares estimate. The penalty we employ is the coefficient matrix's Ky Fan norm. Such a penalty encourages sparsity among the singular values and at the same time gives shrinkage...
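As an illustration of the penalized least squares estimate described above, here is a minimal sketch assuming the Ky Fan penalty is taken over all singular values (i.e. the nuclear norm), so the proximal step is singular value soft-thresholding; the helper names (`svt`, `penalized_ls`) and the toy data are illustrative, not the paper's implementation.

```python
import numpy as np

def svt(A, tau):
    """Singular value soft-thresholding: prox of tau * (sum of singular values)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def penalized_ls(X, Y, lam, iters=500):
    """Proximal gradient sketch for 0.5*||Y - X B||_F^2 + lam * sum of singular values of B."""
    p, q = X.shape[1], Y.shape[1]
    B = np.zeros((p, q))
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(iters):
        grad = X.T @ (X @ B - Y)
        B = svt(B - step * grad, step * lam)
    return B

# toy usage: a rank-2 coefficient matrix recovered with shrunken, sparse singular values
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
B_true = rng.standard_normal((10, 2)) @ rng.standard_normal((2, 6))
Y = X @ B_true + 0.1 * rng.standard_normal((100, 6))
print(np.round(np.linalg.svd(penalized_ls(X, Y, lam=5.0), compute_uv=False), 2))
```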
In this paper we consider sparse approximation problems, that is, general $l_0$ minimization problems with the $l_0$-``norm'' of a vector being part of the constraints or the objective function. In particular, we first study the first-order optimality conditions for these problems. We then propose penalty decomposition (PD) methods for solving them, in which a sequence of penalty subproblems is solved by a block coordinate descent (BCD) method. Under some suitable assumptions, we establish that any accumulation point of the sequence generated by the PD methods satisfies...
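A minimal sketch of a penalty decomposition scheme of the kind described, for the special case min ||Ax - b||^2 subject to ||x||_0 <= K: the variable is split as x = y, the quadratic penalty parameter is increased, and each penalty subproblem is handled by alternating (block coordinate) minimization. Function names and parameter choices are illustrative assumptions.

```python
import numpy as np

def hard_threshold(v, K):
    """Keep the K largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-K:]
    out[idx] = v[idx]
    return out

def penalty_decomposition(A, b, K, rho=1.0, rho_growth=1.5, outer=30, inner=50):
    """PD sketch for min ||Ax - b||^2  s.t. ||x||_0 <= K, via the splitting x = y."""
    n = A.shape[1]
    x, y = np.zeros(n), np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(outer):
        for _ in range(inner):                     # block coordinate descent on (x, y)
            # x-block: minimize ||Ax - b||^2 + rho*||x - y||^2 (closed form)
            x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * y)
            # y-block: minimize ||x - y||^2 s.t. ||y||_0 <= K (hard thresholding)
            y = hard_threshold(x, K)
        rho *= rho_growth                          # drive the two blocks together
    return y
```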
We consider the problem of minimizing the sum of two convex functions: one is smooth and given by a gradient oracle, and the other is separable over blocks of coordinates and has a simple known structure on each block. We develop an accelerated randomized proximal coordinate gradient (APCG) method for minimizing such composite functions. For strongly convex functions, our method achieves faster linear convergence rates than existing randomized coordinate gradient methods. Without strong convexity, it enjoys accelerated sublinear rates. We also show how to apply the APCG method to solve regularized empirical risk...
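The full APCG update also carries extrapolation sequences; the sketch below is a stripped-down, non-accelerated randomized proximal coordinate gradient step for one instance of the composite model (least squares plus an l1 term, which is separable over coordinates), just to show the per-coordinate proximal update that APCG accelerates. Names and constants are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def rand_prox_coord_gradient(A, b, lam, iters=20000, seed=0):
    """Randomized proximal coordinate gradient sketch for
       min_x 0.5*||Ax - b||^2 + lam*||x||_1 (smooth part + separable l1 part)."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    L = (A ** 2).sum(axis=0)          # per-coordinate Lipschitz constants
    x = np.zeros(n)
    r = A @ x - b                     # maintained residual
    for _ in range(iters):
        i = rng.integers(n)
        g = A[:, i] @ r               # partial gradient w.r.t. coordinate i
        x_new = soft_threshold(x[i] - g / L[i], lam / L[i])
        r += A[:, i] * (x_new - x[i]) # incremental residual update
        x[i] = x_new
    return x
```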
Non-convex sparsity-inducing penalties have recently received considerable attention in sparse learning. Recent theoretical investigations have demonstrated their superiority over the convex counterparts in several sparse learning settings. However, solving the non-convex optimization problems associated with these penalties remains a big challenge. A commonly used approach is the Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems. This approach is usually not very practical for...
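To make the Multi-Stage relaxation mentioned above concrete, here is a small sketch for one popular non-convex penalty (the log penalty): each stage linearizes the concave penalty at the current iterate and solves the resulting weighted l1 problem. This illustrates the MS/DC scheme the abstract refers to, not the method the paper itself proposes; parameters are illustrative.

```python
import numpy as np

def weighted_l1_prox_gradient(A, b, w, iters=300):
    """Solve min_x 0.5*||Ax - b||^2 + sum_i w_i*|x_i| by proximal gradient."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - b))
        x = np.sign(z) * np.maximum(np.abs(z) - step * w, 0.0)
    return x

def multi_stage_relaxation(A, b, lam, eps=0.1, stages=5):
    """MS relaxation sketch for the log penalty lam * sum_i log(1 + |x_i|/eps):
       each stage solves a weighted l1 problem whose weights come from
       linearizing the concave penalty at the previous iterate."""
    x = np.zeros(A.shape[1])
    for _ in range(stages):
        w = lam / (np.abs(x) + eps)       # derivative of the log penalty
        x = weighted_l1_prox_gradient(A, b, w)
    return x
```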
In this paper, we propose an efficient and scalable low rank matrix completion algorithm. The key idea is to extend the orthogonal matching pursuit method from the vector case to the matrix case. We further propose an economic version of our algorithm by introducing a novel weight updating rule to reduce the time and storage complexity. Both versions are computationally inexpensive at each iteration and find satisfactory results in a few iterations. Another advantage of the proposed algorithm is that it has only one tunable parameter, which is the rank. It is easy...
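A minimal sketch of a greedy rank-one pursuit of the kind described, under the assumption that each iteration takes the top singular pair of the current residual (observed entries only) as a new basis matrix and refits all weights by least squares; the economic weight updating rule is not reproduced here, and the names are illustrative.

```python
import numpy as np

def rank_one_pursuit(M_obs, mask, rank):
    """Greedy rank-one matrix pursuit sketch for matrix completion.
       M_obs: observed matrix (zeros elsewhere); mask: 1 on observed entries."""
    m, n = M_obs.shape
    bases = []                              # rank-one basis matrices u v^T
    X = np.zeros((m, n))
    for _ in range(rank):
        R = (M_obs - X) * mask              # residual on observed entries
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        bases.append(np.outer(U[:, 0], Vt[0]))
        # refit all weights by least squares over the observed entries
        B = np.stack([(Bk * mask).ravel() for Bk in bases], axis=1)
        theta, *_ = np.linalg.lstsq(B, (M_obs * mask).ravel(), rcond=None)
        X = sum(t * Bk for t, Bk in zip(theta, bases))
    return X
```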
In this paper we first study a smooth optimization approach for solving a class of nonsmooth strictly concave maximization problems whose objective functions admit smooth convex minimization reformulations. In particular, we apply Nesterov's smooth optimization technique [Y. E. Nesterov, Dokl. Akad. Nauk SSSR, 269 (1983), pp. 543–547; Y. E. Nesterov, Math. Programming, 103 (2005), pp. 127–152] to their dual counterparts, which are smooth convex problems. It is shown that the resulting approach has ${\cal O}(1/{\sqrt{\epsilon}})$ iteration complexity for finding an...
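A small sketch of Nesterov's smoothing idea in a simpler setting than the paper's: the nonsmooth objective ||Ax - b||_1 is replaced by a mu-smoothed (Huber-like) surrogate whose gradient is Lipschitz, and an accelerated first-order method is applied to the smoothed problem. The problem instance and names are illustrative assumptions, not the paper's dual construction.

```python
import numpy as np

def nesterov_smoothed_l1(A, b, mu=1e-2, iters=2000):
    """Smoothing + accelerated gradient sketch for min_x ||Ax - b||_1.
       The smoothed surrogate has gradient A^T clip((Ax - b)/mu, -1, 1),
       which is Lipschitz with constant ||A||_2^2 / mu."""
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2 / mu
    x = np.zeros(n)
    y, t = x.copy(), 1.0
    for _ in range(iters):
        grad = A.T @ np.clip((A @ y - b) / mu, -1.0, 1.0)
        x_new = y - grad / L
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + (t - 1.0) / t_new * (x_new - x)   # Nesterov extrapolation
        x, t = x_new, t_new
    return x
```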
The theory of (tight) wavelet frames has been extensively studied in the past twenty years, and they are currently widely used for image restoration and other image processing and analysis problems. The success of wavelet frame based models, including the balanced approach and the analysis based approach, is due to their capability of sparsely approximating piecewise smooth functions like images. Motivated by this, we shall propose an $\ell_0$ minimization model, where the $\ell_0$ ``norm'' of the frame coefficients is penalized. We adapt the penalty decomposition (PD) method of Lu and Zhang...
In this paper, we consider the problem of estimating multiple graphical models simultaneously using the fused lasso penalty, which encourages adjacent graphs to share similar structures. A motivating example is the analysis of brain networks of Alzheimer's disease using neuroimaging data. Specifically, we may wish to estimate a network for normal controls (NC), a network for patients with mild cognitive impairment (MCI), and a network for Alzheimer's patients (AD). We expect the networks for NC and MCI to share common structures but not to be identical to each other; similarly for MCI and AD. The proposed...
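A small illustration of the penalty structure described (off-diagonal sparsity on each graph plus l1 fusion between adjacent graphs such as NC -> MCI -> AD), assuming an ordered sequence of precision matrices; this only evaluates the penalty term, it is not the estimation algorithm.

```python
import numpy as np

def fused_graphical_penalty(Thetas, lam1, lam2):
    """Penalty sketch for jointly estimated, ordered precision matrices:
       lam1 * off-diagonal l1 sparsity on each graph
       + lam2 * l1 fusion between adjacent graphs."""
    sparsity = sum(np.abs(T - np.diag(np.diag(T))).sum() for T in Thetas)
    fusion = sum(np.abs(Thetas[k + 1] - Thetas[k]).sum()
                 for k in range(len(Thetas) - 1))
    return lam1 * sparsity + lam2 * fusion
```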
In this paper we consider general rank minimization problems, with the rank appearing either in the objective function or as a constraint. We first establish that a class of special rank minimization problems has closed-form solutions. Using this result, we then propose penalty decomposition (PD) methods in which each penalty subproblem is solved by a block coordinate descent method. Under some suitable assumptions, we show that any accumulation point of the sequence generated by the PD methods satisfies the first-order optimality conditions of a nonlinear reformulation of the problems...
We consider a class of constrained optimization problems with a possibly nonconvex non-Lipschitz objective and a convex feasible set being the intersection of a polyhedron and a possibly degenerate ellipsoid. Such problems have a wide range of applications in data science, where the objective is used for inducing sparsity in the solutions while the constraint set models the noise tolerance and incorporates other prior information for data fitting. To solve this class of problems, a common approach is the penalty method. However, there is little theory on exact penalization for problems with nonconvex non-Lipschitz objective functions. In this paper,...
Analytical Target Cascading (ATC) is an effective decomposition approach for engineering design optimization problems that have hierarchical structures. With ATC, the overall system is split into subsystems, which are solved separately and coordinated via target/response consistency constraints. As parallel computing becomes more common, it is desirable to have separable subproblems in ATC so that each subproblem can be solved concurrently to increase computational throughput. In this paper, we first examine...
In the practical business environment, portfolio managers often face business-driven requirements that limit the number of constituents in their tracking portfolio. A natural index tracking model is thus to minimize a tracking error measure while enforcing an upper bound on the number of assets in the portfolio. In this paper we consider such a cardinality-constrained index tracking model. In particular, we propose an efficient nonmonotone projected gradient (NPG) method for solving this problem. At each iteration, this method usually solves several projected gradient subproblems. We show that each subproblem has...
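A minimal sketch of a nonmonotone projected gradient iteration for a simplified cardinality-constrained tracking model (only the cardinality constraint is kept, so the projection is keep-the-K-largest; budget and bound constraints are omitted). The acceptance test against the worst of the last few objective values is what makes the scheme nonmonotone; constants and names are illustrative, not the paper's specification.

```python
import numpy as np

def proj_cardinality(x, K):
    """Euclidean projection onto {x : ||x||_0 <= K}: keep the K largest magnitudes."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-K:]
    out[idx] = x[idx]
    return out

def npg_index_tracking(R, y, K, iters=200, memory=5):
    """Nonmonotone projected gradient sketch for min_x ||R x - y||^2 s.t. ||x||_0 <= K.
       R: asset return matrix, y: index returns."""
    n = R.shape[1]
    f = lambda z: np.sum((R @ z - y) ** 2)
    x = proj_cardinality(np.full(n, 1.0 / n), K)
    hist = [f(x)]
    step = 1.0 / np.linalg.norm(R, 2) ** 2
    for _ in range(iters):
        g = 2.0 * R.T @ (R @ x - y)
        t = step
        while True:
            x_new = proj_cardinality(x - t * g, K)
            # accept if better than the worst of the last `memory` objective values
            if f(x_new) <= max(hist[-memory:]) - 1e-4 * np.sum((x_new - x) ** 2) or t < 1e-12:
                break
            t *= 0.5                       # backtracking on the step size
        x = x_new
        hist.append(f(x))
    return x
```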
We consider a class of constrained optimization problems where the objective function is the sum of a smooth function and a nonconvex non-Lipschitz function. Many problems in sparse portfolio selection, edge preserving image restoration, and signal processing can be modelled in this form. First, we propose the concept of a Karush--Kuhn--Tucker (KKT) stationarity condition for the problem and show that it is necessary for optimality under a constraint qualification called the relaxed constant positive linear dependence (RCPLD) condition, which is weaker than...
'Separable' uncertainty sets have been widely used in robust portfolio selection models (e.g. see [E. Erdoğan, D. Goldfarb, and G. Iyengar, Robust portfolio management, manuscript, Department of Industrial Engineering and Operations Research, Columbia University, New York, 2004; D. Goldfarb and G. Iyengar, Robust portfolio selection problems, Math. Oper. Res. 28 (2003), pp. 1–38; R.H. Tütüncü and M. Koenig, Robust asset allocation, Ann. Oper. Res. 132 (2004), pp. 157–187]). For these sets, each type of uncertain parameter (e.g. mean and covariance) has its own uncertainty set. As addressed in [Z. Lu, A...