- Advanced Optimization Algorithms Research
- Sparse and Compressive Sensing Techniques
- Iterative Methods for Nonlinear Equations
- Advanced Control Systems Optimization
- Metaheuristic Optimization Algorithms Research
- Optimization and Variational Analysis
- Numerical methods in inverse problems
- Fractional Differential Equations Solutions
- Stochastic Gradient Optimization Techniques
- Matrix Theory and Algorithms
- Advanced Adaptive Filtering Techniques
- Polynomial and algebraic computation
- Control Systems and Identification
- Flow Measurement and Analysis
- Image and Signal Denoising Methods
- Water Systems and Optimization
- Real-time simulation and control systems
- Risk and Portfolio Optimization
- Numerical methods for differential equations
- Acoustic Wave Phenomena Research
- Underwater Acoustics Research
- Hydraulic and Pneumatic Systems
- Evolutionary Algorithms and Applications
- Advanced Multi-Objective Optimization Algorithms
- Blind Source Separation Techniques
Dongguan University of Technology, 2012–2023
UCSI University, 2021
Hohai University, 2018
Software (Spain), 2009
Hunan University, 2007
In this paper, by the use of a projected PRP (Polak–Ribière–Polyak) conjugate gradient direction, we develop a PRP-based descent method for solving unconstrained optimization problems. The method provides a sufficient descent direction for the objective function. Moreover, if an exact line search is used, it reduces to the standard PRP method. Under suitable conditions, we show that the method with a backtracking-type or generalized Wolfe-type line search is globally convergent. We also report numerical results and compare the performance with existing methods. The results indicate that the proposed method is efficient.
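As a rough illustration of the ingredients named in this abstract (PRP direction plus a backtracking line search), the sketch below implements a generic PRP+ conjugate gradient loop in Python. The restart safeguard, parameter values, and function names are illustrative assumptions; the paper's projected PRP direction and generalized Wolfe-type search are not reproduced.

```python
import numpy as np

def prp_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    """Generic PRP+ conjugate gradient method with Armijo backtracking.

    Illustrative sketch only; not the paper's projected PRP variant.
    """
    x = x0.copy()
    g = grad(x)
    d = -g                                   # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        if g.dot(d) >= 0:                    # safeguard: restart with steepest descent
            d = -g
        # Armijo backtracking line search
        t, c, rho = 1.0, 1e-4, 0.5
        fx = f(x)
        while f(x + t * d) > fx + c * t * g.dot(d) and t > 1e-12:
            t *= rho
        x_new = x + t * d
        g_new = grad(x_new)
        beta = g_new.dot(g_new - g) / g.dot(g)   # Polak-Ribiere-Polyak parameter
        d = -g_new + max(beta, 0.0) * d          # PRP+ truncation
        x, g = x_new, g_new
    return x
```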
The global convergence theory of quasi-Newton methods for optimization problems has been well established, while related work on the globalization of quasi-Newton methods for nonlinear equations is relatively scarce. A major difficulty in the globalization lies in the lack of an efficient line search technique. Recently, some derivative-free line searches have been proposed, and the study has made good progress. In this paper, we summarize recent progress in quasi-Newton methods for solving nonlinear equations.
In this paper, we propose a quasi-Newton method for solving systems of monotone equations. The method is a combination of the Broyden method and the hyperplane projection method. Under appropriate conditions, we prove that the proposed method is globally convergent. Preliminary numerical results show that the method is promising.
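A minimal sketch of how a Broyden direction can be combined with a Solodov–Svaiter-style hyperplane projection step for monotone equations F(x) = 0. The derivative-free line search condition and the update safeguards below are generic stand-ins, not the paper's exact rules.

```python
import numpy as np

def broyden_projection(F, x0, tol=1e-6, max_iter=500):
    """Broyden-type direction + hyperplane projection for monotone F(x) = 0."""
    x = x0.copy()
    B = np.eye(x.size)                    # Broyden approximation of the Jacobian
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            break
        d = np.linalg.solve(B, -Fx)       # quasi-Newton direction: B d = -F(x)
        # derivative-free backtracking: require -F(z)^T d >= sigma * t * ||d||^2
        t, sigma, rho = 1.0, 1e-4, 0.5
        while True:
            z = x + t * d
            Fz = F(z)
            if -Fz.dot(d) >= sigma * t * d.dot(d) or t < 1e-12:
                break
            t *= rho
        # project x onto the hyperplane {y : F(z)^T (y - z) = 0}
        if Fz.dot(Fz) > 0:
            x_new = x - (Fz.dot(x - z) / Fz.dot(Fz)) * Fz
        else:
            x_new = z
        s, y = x_new - x, F(x_new) - Fx
        if s.dot(s) > 1e-16:
            B += np.outer(y - B.dot(s), s) / s.dot(s)   # Broyden rank-one update
        x = x_new
    return x
```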
In this paper, we propose an identification function and develop an active set technique for solving the $\ell_1$ optimization problem. Such a technique has a strong ability to accurately identify the zero components in a neighbourhood of an isolated stationary point without strict complementarity conditions. Based on the technique, a gradient-based method is proposed. To accelerate the algorithm, a subspace Barzilai-Borwein steplength and a subspace exact steplength are developed, respectively. Under appropriate conditions, we show that the method with a nonmonotone line search...
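As a loose illustration of zero-component identification for an $\ell_1$-regularized problem, the sketch below flags components whose magnitude falls below an optimality-residual-based radius. The identification function proposed in the paper is not reproduced; every name and threshold here is a hypothetical stand-in.

```python
import numpy as np

def estimate_zero_set(x, g, lam, kappa=0.5):
    """Generic active-set estimate for min f(x) + lam*||x||_1.

    x : current point, g : gradient of f at x, lam : regularization weight.
    Returns a boolean mask of components estimated to be zero at the solution.
    """
    # minimum-norm subgradient of f(x) + lam*||x||_1 (optimality residual)
    r = np.where(x != 0,
                 g + lam * np.sign(x),
                 np.sign(g) * np.maximum(np.abs(g) - lam, 0.0))
    rho = np.linalg.norm(r) ** kappa      # identification radius shrinks near a solution
    return np.abs(x) <= rho
```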
An active set truncated Newton method for large-scale bound constrained optimization is proposed. The active sets are guessed by an identification technique. The search direction consists of two parts: some of the components are simply defined, while the others are determined by the truncated Newton method. The method, based on a nonmonotone line search technique, is shown to be globally convergent. Numerical experiments are presented using problems in the CUTEr test problem library. The numerical performance reveals that our method is effective and competitive with the famous algorithm TRON.
In this paper, we develop an active set identification technique for solving ℓ1 optimization problems. Such a technique has a strong ability to accurately identify the zero components in a neighbourhood of an optimal solution. Based on the technique, we propose an active set conjugate gradient algorithm. Under appropriate conditions, we show that the method is globally convergent. To accelerate the algorithm, a subspace exact steplength and a preconditioned strategy are proposed and integrated with the method to solve the well-known ℓ2−ℓ1 problem. Numerical experiments...
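For reference, the "well-known ℓ2−ℓ1 problem" mentioned above is usually stated in the following standard form (the data matrix $A$ and observation $b$ are assumed here, since the abstract does not define them):

```latex
\min_{x \in \mathbb{R}^n} \; \tfrac{1}{2}\,\|Ax - b\|_2^{2} + \lambda \|x\|_1,
\qquad \lambda > 0 .
```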
In this paper, we present an algorithm framework for the more general problem of minimizing the sum $f(x)+\psi(x)$, where $f$ is smooth and $\psi$ is convex but possibly nonsmooth. At each step, the search direction is obtained by solving an optimization problem involving a quadratic term with a diagonal Hessian determined by the Barzilai-Borwein steplength plus $\psi(x)$. A nonmonotone strategy is combined with the Barzilai-Borwein steplength to accelerate the convergence process. The method with nonmonotone line search techniques is shown to be globally convergent. In...
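Choosing $\psi(x)=\lambda\|x\|_1$ for concreteness, the diagonal-Hessian subproblem with a Barzilai-Borwein steplength reduces to a soft-thresholding (proximal) step, as in the sketch below. The nonmonotone line search described in the abstract is omitted for brevity, so this is only an illustration of the step structure, not the paper's full framework.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def bb_proximal_gradient(grad_f, lam, x0, max_iter=500, tol=1e-8):
    """BB-scaled proximal-gradient iteration for min f(x) + lam*||x||_1.

    The subproblem min_d  g^T d + ||d||^2/(2*alpha) + lam*||x + d||_1
    is solved in closed form by soft-thresholding x - alpha*g.
    """
    x = x0.copy()
    g = grad_f(x)
    alpha = 1.0
    for _ in range(max_iter):
        x_new = soft_threshold(x - alpha * g, alpha * lam)
        if np.linalg.norm(x_new - x) <= tol:
            break
        g_new = grad_f(x_new)
        s, y = x_new - x, g_new - g
        if s.dot(y) > 1e-16:
            alpha = s.dot(s) / s.dot(y)      # BB1 steplength (diagonal Hessian 1/alpha)
        x, g = x_new, g_new
    return x
```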
In this paper, we investigate a linearly constrained optimization reformulation of a more general form of the $\ell_1$ regularization problem and give some good properties of it. We first show the equivalence between the reformulation and the original problem. Second, a KKT point always exists since the constraints are linear, and half of the constraints must be active at any KKT point. In addition, the KKT points are the same as the stationary points of the original problem. Based on the reformulated problem, we propose a nonmonotone spectral gradient method and establish its global convergence. Numerical experiments with compressive sensing problems show that our...
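A minimal sketch of the kind of linearly constrained reformulation referred to above, using the standard variable split $x = u - v$ (the paper treats a more general form, so the details may differ):

```latex
\min_{x} \; f(x) + \lambda \|x\|_1
\quad\Longleftrightarrow\quad
\min_{u,v} \; f(u - v) + \lambda\, e^{\top}(u + v)
\quad \text{s.t.}\quad u \ge 0,\; v \ge 0,
```

where $e$ is the all-ones vector. At any KKT point one has $u_i v_i = 0$ componentwise, which is consistent with the statement that half of the bound constraints must be active.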
In this article, we first propose a feasible steepest descent direction for box-constrained optimization. By the use of this direction and a recently developed modified PRP method, a subspace method is proposed. Under appropriate conditions, we show that the method is globally convergent. Numerical experiments are presented using problems in the CUTEr test problem libraries. Keywords: Box-constrained optimization; Global convergence; PRP method. AMS Subject Classification: 90C06; 90C25; 65Y20; 94A08.
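One common way to form a feasible descent direction under box constraints $l \le x \le u$ is the projected-gradient construction sketched below; this is a generic illustration only, and the paper's feasible steepest descent direction and modified PRP subspace step are not reproduced.

```python
import numpy as np

def feasible_steepest_descent_direction(x, g, lower, upper):
    """Projected-gradient direction d = P(x - g) - x for box constraints.

    P is the projection onto [lower, upper]; x + t*d stays feasible for t in [0, 1],
    and d = -g on components strictly inside the box.
    """
    return np.clip(x - g, lower, upper) - x
```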
Different variants of particle swarm optimization (PSO) algorithms have been introduced in recent years with various improvements to tackle different types of problems more robustly. However, the conventional initialization scheme tends to generate an initial population with relatively inferior solutions due to its random guess mechanism. In this paper, a PSO variant known as modified chaotic PSO is proposed to solve unconstrained global optimization problems effectively by generating a more promising initial population. Experimental studies are conducted to assess and...
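A minimal sketch of chaos-based population initialization using the logistic map, shown as one common alternative to the random-guess scheme criticized above; the specific chaotic map, parameters, and modification used in the paper may differ.

```python
import numpy as np

def chaotic_init(n_particles, dim, lower, upper, mu=4.0, iters=50, seed=0):
    """Generate a PSO initial population with a logistic chaotic map.

    Starts from random seeds in (0.05, 0.95), iterates z <- mu*z*(1-z),
    and scales the resulting values into the search bounds.
    """
    rng = np.random.default_rng(seed)
    z = rng.uniform(0.05, 0.95, size=(n_particles, dim))
    for _ in range(iters):
        z = mu * z * (1.0 - z)
    return lower + z * (upper - lower)
```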
In this article, we provide two optimality properties of MCP regularized optimization. One shows that the support set of a local minimizer corresponds to linearly independent columns of $A$; the other provides sufficient conditions for a stationary point to be a local minimizer. An active subspace second-order algorithm for the regularized optimization problem is proposed. The active sets are estimated by an identification technique that can accurately identify the zero...
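For reference, the MCP (minimax concave penalty) referred to above is usually defined as follows; the least-squares data-fitting term with matrix $A$ is an assumption based on the abstract's mention of the columns of $A$, and the paper's exact parameterization may differ.

```latex
p_{\lambda,\gamma}(t) =
\begin{cases}
\lambda |t| - \dfrac{t^{2}}{2\gamma}, & |t| \le \gamma\lambda,\\[4pt]
\dfrac{\gamma\lambda^{2}}{2}, & |t| > \gamma\lambda,
\end{cases}
\qquad
\min_{x}\; \tfrac12\|Ax-b\|_2^{2} + \sum_{i} p_{\lambda,\gamma}(x_i),
```

with $\lambda > 0$ and $\gamma > 1$.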