- Advanced Optimization Algorithms Research
- Optimization and Variational Analysis
- Sparse and Compressive Sensing Techniques
- Matrix Theory and Algorithms
- Iterative Methods for Nonlinear Equations
- Advanced Control Systems Optimization
- Advanced Multi-Objective Optimization Algorithms
- Risk and Portfolio Optimization
- Metaheuristic Optimization Algorithms Research
- Optimization and Mathematical Programming
- Advanced Numerical Methods in Computational Mathematics
- Topology Optimization in Engineering
- Evolutionary Algorithms and Applications
- Numerical Methods for Differential Equations
- Fuzzy Systems and Optimization
- Complexity and Algorithms in Graphs
- Adaptive Optics and Wavefront Sensing
- Composite Material Mechanics
- Vehicle Routing Optimization Methods
- Fixed Point Theorems Analysis
- Polynomial and Algebraic Computation
- Differential Equations and Numerical Methods
- Advanced Topology and Set Theory
- Advanced Banach Space Theory
- Advanced Bandit Algorithms Research
Universidade de São Paulo
2016-2025
Universidad San Pedro
2023
Universidade Federal de São Paulo
2011-2013
Universidade Estadual de Campinas (UNICAMP)
2008-2011
Sequential optimality conditions provide adequate theoretical tools to justify stopping criteria for nonlinear programming solvers. Approximate Karush–Kuhn–Tucker (AKKT) and approximate gradient projection (AGP) conditions are analysed in this work. These conditions are not necessarily equivalent; implications between them and counter-examples will be shown, and algorithmic consequences will be discussed.
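For context, the AKKT condition named in this abstract is commonly stated as follows (a standard formulation from the sequential-optimality literature, not quoted from the abstract itself):

```latex
% Approximate-KKT (AKKT) condition for the problem
%   min f(x)   s.t.   h(x) = 0,  g(x) <= 0.
% A feasible point x^* satisfies AKKT if there exist sequences
% x^k -> x^* and multipliers lambda^k, mu^k >= 0 such that
\[
  \lim_{k\to\infty}
    \Bigl\| \nabla f(x^k) + \nabla h(x^k)\lambda^k + \nabla g(x^k)\mu^k \Bigr\| = 0,
  \qquad
  \lim_{k\to\infty} \min\{-g_i(x^k),\, \mu_i^k\} = 0 \ \ \text{for all } i.
\]
```

The point of such conditions is that every local minimizer satisfies them without any constraint qualification, which is what makes them usable as solver stopping criteria.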
We present two new constraint qualifications (CQs) that are weaker than the recently introduced relaxed constant positive linear dependence (RCPLD) CQ. RCPLD is based on the assumption that many subsets of the gradients of the active constraints preserve positive linear dependence locally. A major open question was to identify the exact set of gradients whose properties had to be preserved locally and that would still work as a CQ. This is done in the first new CQ, which we call the constant rank of the subspace component (CRSC) CQ, which also preserves the good properties of RCPLD, such as local stability and the validity of an error...
In recent years, the theoretical convergence of iterative methods for solving nonlinear constrained optimization problems has been addressed using sequential optimality conditions, which are satisfied by minimizers independently of constraint qualifications (CQs). Even though there is a considerable literature devoted to sequential conditions for standard nonlinear optimization, the same is not true for mathematical programs with complementarity constraints (MPCCs). In this paper, we show that the established sequential optimality conditions are not suitable for the analysis...
Sequential optimality conditions have recently played an important role in the analysis of the global convergence of optimization algorithms towards first-order stationary points, justifying their stopping criteria. In this article, we introduce a sequential optimality condition that takes into account second-order information and allows us to improve the assumptions of several algorithms, which is our main goal. We also present a companion constraint qualification that is less stringent than previous ones associated with such methods, like...
Generalized Nash equilibrium problems (GNEPs) are a generalization of classic Nash equilibrium problems (NEPs), where each player's strategy set depends on the choices of the other players. In this work we study constraint qualifications (CQs) and optimality conditions tailored for GNEPs, and discuss their relations and implications for the global convergence of algorithms. We show the surprising fact that, in contrast to the case of nonlinear programming, the general Karush–Kuhn–Tucker (KKT) residual cannot be made arbitrarily small near a solution of a GNEP....
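To make the KKT residual mentioned in this abstract concrete, a GNEP's stationarity system is usually written by stacking each player's KKT conditions (a standard formulation, sketched here assuming inequality constraints only):

```latex
% Player \nu solves, with the other players' strategies x^{-\nu} held fixed:
\[
  \min_{x^\nu}\; f_\nu(x^\nu, x^{-\nu})
  \quad \text{s.t.} \quad g^\nu(x^\nu, x^{-\nu}) \le 0 .
\]
% The KKT system of the GNEP concatenates all players' conditions:
\[
  \nabla_{x^\nu} f_\nu(x) + \nabla_{x^\nu} g^\nu(x)^{\top} \mu^\nu = 0,
  \qquad
  \mu^\nu \ge 0, \quad (\mu^\nu)^{\top} g^\nu(x) = 0,
  \qquad \nu = 1, \dots, N .
\]
```

The KKT residual is the violation of this concatenated system; the abstract's point is that, unlike in nonlinear programming, this residual need not vanish along sequences approaching a solution.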
Sequential optimality conditions play a major role in proving stronger global convergence results for numerical algorithms for nonlinear programming. Several extensions have been described in conic contexts, in which many open questions have arisen. In this paper, we present new sequential optimality conditions in the context of a general conic framework, which explains and improves several known results for specific cases, such as semidefinite programming and second-order cone programming. In particular, we show that feasible limit points of sequences generated by the augmented...