Mingyuan Cao

ORCID: 0000-0003-0183-1554
Research Areas
  • Advanced Optimization Algorithms Research
  • Iterative Methods for Nonlinear Equations
  • Optimization and Variational Analysis
  • Matrix Theory and Algorithms
  • Sparse and Compressive Sensing Techniques
  • Tensor decomposition and applications
  • Model Reduction and Neural Networks
  • Seed and Plant Biochemistry
  • Elasticity and Material Modeling
  • Metaheuristic Optimization Algorithms Research
  • Stochastic Gradient Optimization Techniques
  • Optimization and Mathematical Programming
  • Organoselenium and organotellurium chemistry
  • Advanced Numerical Methods in Computational Mathematics
  • Adipose Tissue and Metabolism
  • Advanced Control Systems Optimization
  • Advanced Bandit Algorithms Research
  • Polynomial and algebraic computation
  • Advanced Numerical Analysis Techniques
  • Numerical methods for differential equations
  • Carbon and Quantum Dots Applications
  • Selenium in Biological Systems
  • Nutrition, Genetics, and Disease
  • Fractional Differential Equations Solutions
  • Numerical methods in inverse problems

China Pharmaceutical University
2023-2025

Beihua University
2011-2024

Yangon University of Economics
2022

Jilin University
2018

Quinoa is a nutrient-rich pseudocereal with a low glycemic index and load. However, its therapeutic potency and underlying mechanism against insulin resistance (IR) have not been fully elucidated. In this work, network pharmacology was applied to screen IR targets and their related pathways. The efficacy of black quinoa polyphenols (BQP) on IR improvement was evaluated and its mechanism uncovered based on an in vitro model combined with molecular docking. Ten phenolic constituents of BQP were detected, and the results show that the PI3K/Akt...

10.1021/acs.jafc.3c05900 article EN Journal of Agricultural and Food Chemistry 2023-11-22

Abstract The spectral conjugate gradient methods are of considerable interest and have been proved to be effective for strictly convex quadratic minimisation. In this paper, a new spectral conjugate gradient method is proposed to solve large-scale unconstrained optimisation problems. Motivated by the advantages of the approximate optimal stepsize strategy used in the gradient method, we design a scheme for the choices of the parameters. Furthermore, the search direction satisfies the sufficient descent condition. Under some suitable assumptions, the global...

10.1186/s13660-020-02375-z article EN cc-by Journal of Inequalities and Applications 2020-04-25
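
For readers new to the family, the following is a minimal Python sketch of a generic spectral conjugate gradient loop. It is an illustration only: the Armijo backtracking stands in for the paper's approximate-optimal-stepsize strategy, and the theta (Barzilai–Borwein-type) and beta (Dai–Yuan-type) formulas are conventional placeholders, not the parameter choices derived in the paper.

```python
import numpy as np

def spectral_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    """Generic spectral conjugate gradient loop (illustrative only).

    Direction: d_k = -theta_k * g_k + beta_k * d_{k-1}.
    The Armijo backtracking below stands in for the paper's
    approximate-optimal-stepsize strategy.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        # Armijo backtracking line search.
        alpha, c, rho = 1.0, 1e-4, 0.5
        fx = f(x)
        while f(x + alpha * d) > fx + c * alpha * g.dot(d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        # Illustrative parameters: Barzilai-Borwein-type spectral theta
        # and Dai-Yuan-type beta (not the choices derived in the paper).
        theta = s.dot(s) / max(s.dot(y), 1e-12)
        beta = g_new.dot(g_new) / max(d.dot(y), 1e-12)
        d = -theta * g_new + beta * d
        # Safeguard: restart with steepest descent if descent is lost.
        if g_new.dot(d) >= 0.0:
            d = -g_new
        x, g = x_new, g_new
    return x
```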

In this paper, a new adaptive Levenberg–Marquardt method is proposed to solve nonlinear equations, including supply chain optimization problems. We present an update rule which is a segmented function of the ratio between the actual and predicted reductions of the objective function, and which accepts a large number of unsuccessful iterations to avoid jumping in local areas. The global convergence and quadratic convergence are proved by using the trust region technique and a local error bound condition, respectively. In addition, we use the algorithm to test symmetric...

10.3390/sym15030588 article EN Symmetry 2023-02-24
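
A hedged sketch of the Levenberg–Marquardt skeleton the abstract describes, with a ratio-driven piecewise (segmented) update of the damping parameter. The damping choice mu*||F|| and the 0.25/0.75 thresholds are conventional textbook values, not the paper's segmented function.

```python
import numpy as np

def adaptive_lm(F, J, x0, tol=1e-8, max_iter=200):
    """Levenberg-Marquardt skeleton with a ratio-driven piecewise update
    of the damping parameter (illustrative thresholds, not the paper's
    segmented function)."""
    x = np.asarray(x0, dtype=float)
    mu = 1.0
    for _ in range(max_iter):
        Fx, Jx = F(x), J(x)
        if np.linalg.norm(Fx) <= tol:
            break
        # LM trial step: (J^T J + mu ||F|| I) d = -J^T F.
        A = Jx.T @ Jx + mu * np.linalg.norm(Fx) * np.eye(x.size)
        d = np.linalg.solve(A, -Jx.T @ Fx)
        # Ratio of actual to predicted reduction of 0.5 * ||F||^2.
        ared = 0.5 * (np.linalg.norm(Fx) ** 2 - np.linalg.norm(F(x + d)) ** 2)
        pred = -(Jx.T @ Fx).dot(d) - 0.5 * d.dot(Jx.T @ (Jx @ d))
        r = ared / pred if pred > 0 else -1.0
        # Piecewise (segmented) damping update driven by r.
        if r < 0.25:
            mu *= 4.0
        elif r > 0.75:
            mu = max(mu / 2.0, 1e-12)
        if r > 0:   # accept the trial step
            x = x + d
    return x
```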

Abstract In this paper, we present a new conjugate gradient method using an acceleration scheme for solving large-scale unconstrained optimization. The generated search direction satisfies both the sufficient descent condition and the Dai–Liao conjugacy condition independent of the line search. Moreover, the value of the parameter contains more useful information without adding computational cost or storage requirements, which can improve the numerical performance. Under proper assumptions, the global convergence result of the proposed...

10.1186/s13660-019-2238-9 article EN cc-by Journal of Inequalities and Applications 2019-11-20
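
For reference, the two properties named in the abstract can be written in standard notation (g_k the gradient, d_k the search direction; the constants c > 0 and t >= 0 are method-dependent):

```latex
% Sufficient descent condition and Dai-Liao conjugacy condition:
\[
  g_k^{\top} d_k \le -c\,\lVert g_k\rVert^{2},
  \qquad
  d_k^{\top} y_{k-1} = -t\, g_k^{\top} s_{k-1},
\]
% where s_{k-1} = x_k - x_{k-1} and y_{k-1} = g_k - g_{k-1}.
```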

A new self-adaptive rule for the trust region radius is introduced, which is given by a piecewise function of the ratio between the actual and predicted reductions of the objective function. A trust region method based on this rule for unconstrained optimization problems is presented. The convergence properties are established under reasonable assumptions. Preliminary numerical results show that the method is robust in solving unconstrained optimization problems.

10.1155/2014/610612 article EN cc-by Journal of Applied Mathematics 2014-01-01
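
A minimal sketch of such a piecewise radius rule, using conventional thresholds and factors rather than the specific self-adaptive function introduced in the paper:

```python
def update_radius(r, delta, step_norm, delta_max=1e3):
    """Piecewise trust-region radius update driven by the ratio
    r = (actual reduction) / (predicted reduction).
    Thresholds (0.25, 0.75) and factors are textbook values,
    not the paper's self-adaptive rule."""
    if r < 0.25:                          # poor model agreement: shrink
        return 0.25 * step_norm
    if r > 0.75 and step_norm >= delta:   # good agreement at the boundary: expand
        return min(2.0 * delta, delta_max)
    return delta                          # otherwise keep the radius
```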

We present a new Newton-like method for large-scale unconstrained nonconvex minimization. A straightforward limited memory quasi-Newton update based on the modified secant equation is deduced to construct the trust region subproblem, in which the information of both the function value and the gradient is used to approximate the Hessian. The global convergence of the algorithm is proved. Numerical results indicate that the proposed method is competitive and efficient on some classical test problems.

10.1155/2013/478407 article EN cc-by Abstract and Applied Analysis 2013-01-01
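
The "modified equation" here is a modified secant equation; a widely used variant that feeds function values as well as gradients into the Hessian approximation (the paper's variant may differ in detail) is:

```latex
% Modified secant equation B_{k+1} s_k = \hat y_k, with
% s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k:
\[
  \hat{y}_k = y_k + \frac{\theta_k}{s_k^{\top} s_k}\, s_k,
  \qquad
  \theta_k = 6\bigl(f_k - f_{k+1}\bigr) + 3\bigl(g_k + g_{k+1}\bigr)^{\top} s_k .
\]
```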

We propose and generalize a new nonlinear conjugate gradient method for unconstrained optimization. The global convergence is proved with the Wolfe line search. Numerical experiments are reported which support the theoretical analyses and show the presented methods outperforming the CG_DESCENT method.

10.1155/2012/932980 article EN cc-by Journal of Applied Mathematics 2012-01-01
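
The Wolfe line search referred to imposes the standard conditions on the stepsize alpha_k, with 0 < c_1 < c_2 < 1:

```latex
% Standard (weak) Wolfe conditions:
\[
  f(x_k + \alpha_k d_k) \le f(x_k) + c_1 \alpha_k \nabla f(x_k)^{\top} d_k,
  \qquad
  \nabla f(x_k + \alpha_k d_k)^{\top} d_k \ge c_2 \nabla f(x_k)^{\top} d_k .
\]
```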

In this work, we proposed a new trust region method for solving large-scale unconstrained optimization problems. The subproblem, which has a simple form, was constructed based on weak secant equations, which utilized both the gradient and function value information available from the three most recent points. A modified Metropolis criterion was used to determine whether to accept the trial step, together with an adaptive strategy to update the radius. The global convergence and locally superlinear convergence of the algorithm were...

10.3934/math.2024413 article EN cc-by AIMS Mathematics 2024-01-01
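
For background, the classical Metropolis acceptance rule that the paper's modified criterion builds on, as a self-contained sketch (the temperature schedule and the modification itself are the paper's contribution and are not reproduced here):

```python
import math
import random

def metropolis_accept(f_current, f_trial, temperature):
    """Classical Metropolis acceptance rule: always accept an improving
    trial step; accept a worsening one with probability
    exp(-(f_trial - f_current) / temperature), temperature > 0."""
    if f_trial <= f_current:
        return True
    return random.random() < math.exp(-(f_trial - f_current) / temperature)
```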

In this paper, aiming at nonlinear equations, a new two-step Levenberg–Marquardt method was proposed. We presented a new parameter to obtain the trial step. A modified Metropolis criterion was used to adjust the upper bound of the approximation. The convergence was analyzed under the Hölderian local error bound condition and the Hölderian continuity of the Jacobian. Numerical experiments showed that the algorithm is effective and competitive in the numbers of function evaluations, Jacobian evaluations, and iterations.

10.3934/math.20241199 article EN cc-by AIMS Mathematics 2024-01-01
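
For reference, a Hölderian local error bound with exponent delta in (0, 1] requires, on a neighbourhood N of the solution set X* of F(x) = 0 (delta = 1 recovers the usual local error bound):

```latex
% Hölderian local error bound condition:
\[
  \mathrm{dist}(x, X^{*}) \le c\,\lVert F(x)\rVert^{\delta}
  \quad \text{for all } x \in N .
\]
```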

In this paper, we propose a novel nonmonotone trust region method that incorporates the Metropolis criterion to construct a new function sequence. This sequence is used to update both the ratio and the iteration criterion, increasing the likelihood of accepting the current trial step and introducing randomness into the iterative process. When the trial step is not accepted, we introduce an improved line search technique to continue the iteration. This approach significantly reduces the number of subproblems that need to be solved, thereby saving computational...

10.3934/math.20241528 article EN cc-by AIMS Mathematics 2024-01-01

This paper presents a derivative-free conjugate gradient type algorithm for large-scale nonlinear systems of monotone equations. New search directions with superior numerical performance are constructed by introducing a new parameter and particular spectral parameters. These directions inherit the stability of the RMIL direction and satisfy the sufficient descent condition independent of the step size. The method combines the hyperplane projection and line search techniques to compute the iteration points. Under some appropriate assumptions, the global...

10.4310/cms.2023.v21.n2.a11 article EN Communications in Mathematical Sciences 2023-01-01
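
The hyperplane projection step (in the Solodov–Svaiter style) works as follows: given a trial point z_k = x_k + alpha_k d_k with F(z_k) != 0, monotonicity places the solution set on one side of the hyperplane H_k = { x : F(z_k)^T (x - z_k) = 0 }, so the next iterate is the projection of x_k onto H_k:

```latex
% Hyperplane projection step:
\[
  x_{k+1} = x_k
  - \frac{F(z_k)^{\top}\bigl(x_k - z_k\bigr)}{\lVert F(z_k)\rVert^{2}}\, F(z_k) .
\]
```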

Two classes of new nonlinear conjugate gradient methods are proposed in order to avoid the drawbacks of FR and CD. By induction and contradiction, we prove the sufficient descent properties without any line search and the global convergence with the Wolfe search. The numerical results for 10 classical unconstrained optimization problems indicate that the proposed methods outperform the others in terms of iterations, function calls, etc., and are effective.

10.1109/cso.2011.290 article EN 2011-04-01
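
For reference, the Fletcher–Reeves (FR) and Conjugate Descent (CD) parameters whose drawbacks motivate the new classes:

```latex
% FR and CD conjugate gradient parameters:
\[
  \beta_k^{\mathrm{FR}} = \frac{\lVert g_k\rVert^{2}}{\lVert g_{k-1}\rVert^{2}},
  \qquad
  \beta_k^{\mathrm{CD}} = -\frac{\lVert g_k\rVert^{2}}{d_{k-1}^{\top} g_{k-1}} .
\]
```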

Abstract The eigenvalues of tensors have become more and more important in numerical multilinear algebra. In this paper, based on a nonmonotone technique, an accelerated Levenberg–Marquardt (LM) algorithm is presented for computing eigenvalues of symmetric tensors, in which the LM steps are computed at each iteration. We establish the global convergence of the proposed algorithm using the properties of norms. Under the local error-bound condition, cubic convergence is derived. Numerical results show that the method is efficient.

10.1111/itor.12954 article EN International Transactions in Operational Research 2021-02-28
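
Assuming the H-eigenvalue formulation for concreteness (the truncated abstract does not specify the eigenvalue type), Qi's H-eigenpairs of an m-th order symmetric tensor solve a square polynomial system, which is exactly the kind of nonlinear system F(x) = 0 an LM iteration targets:

```latex
% H-eigenpair (lambda, x) of an m-th order symmetric tensor A:
\[
  \mathcal{A}x^{m-1} = \lambda x^{[m-1]},
  \qquad
  \bigl(\mathcal{A}x^{m-1}\bigr)_i
    = \sum_{i_2,\dots,i_m} a_{i i_2 \cdots i_m} x_{i_2} \cdots x_{i_m},
  \qquad
  \bigl(x^{[m-1]}\bigr)_i = x_i^{m-1} .
\]
```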

In this work, considering the advantages of the spectral conjugate gradient method and the quasi-Newton method, a three-term spectral conjugate gradient method with a random parameter is proposed. The spectral parameter in the new search direction is determined by minimizing the Frobenius norm of the difference between the direction matrix and the self-scaled memoryless BFGS matrix based on a modified secant equation. Then, a direction satisfying the sufficient descent condition is obtained. The global convergence is proved under appropriate assumptions. Numerical experiments show that our method has better performance compared...

10.1155/2022/8939770 article EN cc-by Journal of Mathematics 2022-01-01
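
For reference, the self-scaled memoryless BFGS matrix is one BFGS update of tau_k I, with s_k = x_{k+1} - x_k, y_k = g_{k+1} - g_k and rho_k = 1 / (y_k^T s_k); a three-term direction arises because -H_{k+1} g_{k+1} is a combination of -g_{k+1}, s_k and y_k. The modified-secant analogue used in the paper is not reproduced here.

```latex
% Self-scaled memoryless BFGS matrix (expanded form):
\[
  H_{k+1} = \tau_k I
  - \tau_k \rho_k \bigl(s_k y_k^{\top} + y_k s_k^{\top}\bigr)
  + \rho_k \bigl(1 + \tau_k \rho_k\, y_k^{\top} y_k\bigr) s_k s_k^{\top} .
\]
```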

We transform the computation of Z-eigenvalues of symmetric tensors into unconstrained optimization problems with a shifted parameter. An accelerated conjugate gradient method is proposed for solving these problems. If the problem results in a nonzero critical point, then it is a Z-eigenvector corresponding to a Z-eigenvalue. Otherwise, we solve a shifted problem to find one. In our method, the new conjugate parameter is a modified CD parameter, and an acceleration scheme is presented by using the quasi-Newton direction. The global convergence is proved. Numerical...

10.3934/math.2023766 article EN cc-by AIMS Mathematics 2023-01-01
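
For reference, a Z-eigenpair (lambda, x) of an m-th order symmetric tensor is defined by:

```latex
% Z-eigenpair of an m-th order symmetric tensor A:
\[
  \mathcal{A}x^{m-1} = \lambda x, \qquad x^{\top}x = 1 .
\]
```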

In this paper, a new three-term conjugate gradient algorithm is proposed to solve unconstrained optimization problems, including regression problems. We minimize the distance in the Frobenius norm between the search direction matrix and the self-scaling memoryless BFGS matrix to determine the search direction, which has the same advantages as the quasi-Newton method. At the same time, a random parameter is used so that the direction satisfies the sufficient descent condition. For uniformly convex functions and general nonlinear functions, we establish the global convergence of...

10.11650/tjm/230503 article EN Taiwanese Journal of Mathematics 2023-06-06

Abstract In this paper, the inequality constrained optimization problem is transformed into a minimax problem. The objective function of the latter is essentially an exact penalty function, which can avoid the ill-conditioning that appears when the penalty parameter is too large. For the max-value function containing 0 in the minimax problem, most previous works approximate max{0, f_i(x)} (i = 1, 2, ..., m) one by one, while the modified flattened aggregate function proposed in this paper approximates max{0, f_1(x), ..., f_m(x)} as a whole, which brings great convenience to solving problems...

10.21203/rs.3.rs-1935382/v1 preprint EN cc-by Research Square (Research Square) 2022-08-10
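
The classical exponential aggregate function gives the flavour of this joint approximation, with smoothing parameter p > 0; the paper's "modified flattened" variant differs in detail:

```latex
% Exponential (Kreisselmeier-Steinhauser-type) aggregate smoothing of the
% max function over all constraints jointly:
\[
  \max\{0,\, f_1(x), \dots, f_m(x)\}
  \approx \frac{1}{p}\,
    \ln\!\Bigl(1 + \sum_{i=1}^{m} e^{\,p f_i(x)}\Bigr) .
\]
```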