Marcos Raydan

ORCID: 0000-0003-0417-7981
Research Areas
  • Advanced Optimization Algorithms Research
  • Matrix Theory and Algorithms
  • Sparse and Compressive Sensing Techniques
  • Optimization and Variational Analysis
  • Iterative Methods for Nonlinear Equations
  • Numerical methods in inverse problems
  • Electromagnetic Scattering and Analysis
  • Numerical methods for differential equations
  • Advanced Numerical Methods in Computational Mathematics
  • Stochastic Gradient Optimization Techniques
  • Advanced Multi-Objective Optimization Algorithms
  • Fractional Differential Equations Solutions
  • Statistical and numerical algorithms
  • Structural Health Monitoring Techniques
  • Risk and Portfolio Optimization
  • Optimization and Search Problems
  • Advanced Numerical Analysis Techniques
  • Model Reduction and Neural Networks
  • Numerical methods in engineering
  • Differential Equations and Numerical Methods
  • Metaheuristic Optimization Algorithms Research
  • Mathematical Inequalities and Applications
  • Optimization and Packing Problems
  • Graph Theory and CDMA Systems
  • Constraint Satisfaction and Optimization

Universidade Nova de Lisboa
2022-2024

Simón Bolívar University
2008-2017

Université de Picardie Jules Verne
2015

Central University of Venezuela
2001-2014

University of Kentucky
1993-1994

Nonmonotone projected gradient techniques are considered for the minimization of differentiable functions on closed convex sets. The classical schemes are extended to include a nonmonotone steplength strategy based on the Grippo--Lampariello--Lucidi line search. In particular, this strategy is combined with the spectral choice of steplength to accelerate the convergence process. In addition to the nonlinear projected path, a feasible direction is used as the search direction to avoid additional trial projections during the one-dimensional search. Convergence properties and extensive...

10.1137/s1052623497330963 article EN SIAM Journal on Optimization 2000-01-01
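The combination described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the `project` operator is supplied by the caller, and the memory length `M`, the Armijo constant `1e-4`, and the reduction of the spectral-steplength safeguards to a simple positivity check are all placeholder choices.

```python
import numpy as np

def spg(f, grad, project, x0, M=10, max_iter=200, tol=1e-8):
    """Minimal nonmonotone spectral projected gradient sketch.
    `project` maps any point onto the closed convex feasible set."""
    x = project(np.asarray(x0, dtype=float))
    g = grad(x)
    lam = 1.0                              # spectral steplength
    f_hist = [f(x)]                        # memory for the nonmonotone (GLL) test
    for _ in range(max_iter):
        d = project(x - lam * g) - x       # feasible descent direction
        if np.linalg.norm(d) < tol:        # stationary for the projected step
            break
        f_ref = max(f_hist[-M:])           # GLL reference value
        alpha, gtd = 1.0, g @ d
        while f(x + alpha * d) > f_ref + 1e-4 * alpha * gtd:
            alpha *= 0.5                   # backtrack along d: no extra projections
        x_new = x + alpha * d
        s, y = x_new - x, grad(x_new) - g
        sty = s @ y
        lam = (s @ s) / sty if sty > 0 else 1.0  # spectral (Barzilai-Borwein) choice
        x, g = x_new, g + y
        f_hist.append(f(x))
    return x
```

Backtracking along the fixed feasible direction `d`, rather than along the projected path, is what avoids the additional trial projections mentioned in the abstract.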

The Barzilai and Borwein gradient method for the solution of large-scale unconstrained minimization problems is considered. This method requires few storage locations and very inexpensive computations. Furthermore, it does not guarantee descent in the objective function, and no line search is required. Recently, global convergence for the convex quadratic case has been established. However, in the nonquadratic case, the method needs to be incorporated into a globalization scheme. In this work, a nonmonotone line search strategy that guarantees global convergence is combined with...

10.1137/s1052623494266365 article EN SIAM Journal on Optimization 1997-02-01

In a recent paper, Barzilai and Borwein presented a new choice of steplength for the gradient method. Their choice does not guarantee descent in the objective function but greatly speeds up the convergence of the method. They presented a convergence analysis of their method only in the two-dimensional quadratic case. We establish convergence when the method is applied to the minimization of a strictly convex quadratic function of any number of variables.

10.1093/imanum/13.3.321 article EN IMA Journal of Numerical Analysis 1993-01-01
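For a strictly convex quadratic, the Barzilai–Borwein idea fits in a few lines: the steplength is computed from the last step `s` and gradient change `y` rather than from a line search. A minimal sketch, assuming the quadratic form 0.5 x'Ax - b'x with A symmetric positive definite; the function name and the plain-gradient first step are illustrative choices.

```python
import numpy as np

def barzilai_borwein(A, b, x0, max_iter=500, tol=1e-10):
    """Barzilai-Borwein gradient method for min 0.5 x'Ax - b'x with A SPD.
    No line search: the steplength uses only the last step and gradient change."""
    x = np.asarray(x0, dtype=float)
    g = A @ x - b                  # gradient of the quadratic
    lam = 1.0                      # first step is plain gradient descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - lam * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        lam = (s @ s) / (s @ y)    # BB steplength; s'y = s'As > 0 for SPD A
        x, g = x_new, g_new
    return x
```

Note that the iteration is typically nonmonotone in both the objective and the gradient norm, which is exactly why the convergence analysis established in the paper is nontrivial.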

A fully derivative-free spectral residual method for solving large-scale nonlinear systems of equations is presented. It uses in a systematic way the residual vector as a search direction, a spectral steplength that produces a nonmonotone process, and a globalization strategy that allows for this nonmonotone behavior. The global convergence analysis of the combined scheme is presented. An extensive set of numerical experiments indicates that the new combination is competitive with, and frequently better than, well-known Newton--Krylov methods for large-scale problems...

10.1090/s0025-5718-06-01840-0 article EN public-domain Mathematics of Computation 2006-04-11
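A simplified sketch of the idea, under stated assumptions: the published method also tries the residual direction with both signs and safeguards the spectral coefficient, which this toy version omits; the memory length `M`, the vanishing tolerance `eta`, and the forcing constant `1e-4` are illustrative choices.

```python
import numpy as np

def df_sane(F, x0, M=10, max_iter=500, tol=1e-10):
    """Simplified derivative-free spectral residual sketch for F(x) = 0.
    The residual itself is the search direction; no Jacobian is ever formed."""
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    sigma = 1.0                            # spectral steplength
    merit = [float(Fx @ Fx)]               # squared residual norms
    for k in range(max_iter):
        if np.sqrt(Fx @ Fx) < tol:
            break
        ref = max(merit[-M:])              # nonmonotone reference value
        eta = merit[0] / (k + 1) ** 2      # vanishing extra tolerance
        alpha = 1.0
        while True:
            x_trial = x - alpha * sigma * Fx
            F_trial = F(x_trial)
            # nonmonotone acceptance: the merit may rise up to ref + eta
            if F_trial @ F_trial <= ref + eta - 1e-4 * alpha ** 2 * merit[-1]:
                break
            alpha *= 0.5
        s, y = x_trial - x, F_trial - Fx
        sy = s @ y
        sigma = (s @ s) / sy if abs(sy) > 1e-16 else 1.0  # spectral coefficient
        x, Fx = x_trial, F_trial
        merit.append(float(Fx @ Fx))
    return x
```

Only residual evaluations appear in the loop, which is what "fully derivative-free" means here: no Jacobian products and no directional-derivative estimates.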

Fortran 77 software implementing the SPG method is introduced. SPG is a nonmonotone projected gradient algorithm for solving large-scale convex-constrained optimization problems. It combines the classical projected gradient method with the spectral choice of steplength and a nonmonotone line-search strategy. The user provides objective function values, gradients, and projections onto the feasible set. Some recent numerical tests are reported on very large location problems, indicating that the software is substantially more efficient than existing general-purpose software for problems which...

10.1145/502800.502803 article EN ACM Transactions on Mathematical Software 2001-09-01

Inexact spectral projected gradient methods on convex sets, by Ernesto G. Birgin, José Mario Martínez, and Marcos Raydan. IMA Journal of Numerical Analysis, Volume 23, Issue 4, October 2003, Pages 539–559.

10.1093/imanum/23.4.539 article EN IMA Journal of Numerical Analysis 2003-10-01

Two algorithms are introduced that show exceptional promise in finding molecular conformations using distance geometry on nuclear magnetic resonance data. The first algorithm is a gradient version of the majorization method from multidimensional scaling; its main contribution is a large decrease in CPU time. The second is an iterative method that alternates between the possible configurations obtained from the data and the permissible data points near the configuration. These ideas are similar to alternating least squares or projections onto convex sets. The iterations significantly...

10.1002/jcc.540140115 article EN Journal of Computational Chemistry 1993-01-01
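The majorization step at the heart of the first algorithm is the classical Guttman transform of multidimensional scaling. The sketch below shows the plain (non-gradient, unweighted) version, not the accelerated variant the paper contributes; the function name and the dissimilarity-matrix convention are illustrative assumptions.

```python
import numpy as np

def smacof_step(X, D):
    """One Guttman-transform step of (unweighted) stress majorization:
    never increases the stress sum_{i<j} (D_ij - ||x_i - x_j||)^2,
    where D holds the target distances and rows of X the current points."""
    n = len(X)
    E = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # current distances
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(E > 0, D / E, 0.0)    # D_ij / ||x_i - x_j||
    B = -ratio
    np.fill_diagonal(B, 0.0)
    np.fill_diagonal(B, -B.sum(axis=1))        # rows of B now sum to zero
    return (B @ X) / n                         # Guttman transform
```

Because each step is the exact minimizer of a quadratic majorant of the stress, the iteration is monotone, which makes it a natural baseline for the faster gradient version described in the abstract.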

Over the last two decades, it has been observed that using the gradient vector as a search direction in large-scale optimization may lead to efficient algorithms. The effectiveness relies on choosing step lengths according to novel ideas that are related to the spectrum of the underlying local Hessian rather than to the standard decrease in the objective function. A review of these so-called spectral projected gradient methods for convex constrained optimization is presented. To illustrate the performance of these low-cost schemes, an optimization problem on the set of positive definite...

10.18637/jss.v060.i03 article EN cc-by Journal of Statistical Software 2014-01-01

The spectral gradient method has proved to be effective for solving large-scale optimization problems. In this work we extend the approach to solve nonlinear systems of equations. We consider a strategy based on nonmonotone line search techniques to guarantee global convergence, and discuss implementation details. We compare the performance of our new method with recent implementations of inexact Newton schemes based on Krylov subspace inner iterative methods for the linear systems. Our numerical experiments indicate that the method competes...

10.1080/10556780310001610493 article EN Optimization methods & software 2003-10-01

A generalization of the steepest descent and other methods for solving a large scale symmetric positive definite system $Ax = b$ is presented. Given an integer $m$, the new iteration is given by $x_{k+1} = x_k - \lambda(x_{\nu(k)}) (A x_k - b)$, where $\lambda(x_{\nu(k)})$ is the steplength at a previous iterate and $\nu(k) \in \{k, k-1, \ldots, \max\{0, k-m\}\}$. Global convergence to the solution of the problem is established under a more general framework, and numerical experiments are performed that suggest that some strategies for the choice of $\nu(k)$ give...

10.1137/s003614299427315x article EN SIAM Journal on Numerical Analysis 1998-01-01
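A minimal sketch of this "retarded steplength" idea: each iteration reuses the Cauchy (steepest descent) steplength of an iterate up to `m` steps back. Choosing `nu(k)` uniformly at random in the allowed window, with a fixed seed, is just one of the admissible strategies; the function name and the test problem are illustrative.

```python
import numpy as np

def delayed_gradient(A, b, x0, m=2, max_iter=5000, tol=1e-10):
    """Gradient method with retarded steplength for Ax = b with A SPD:
    x_{k+1} = x_k - lambda(x_{nu(k)}) (A x_k - b)."""
    rng = np.random.default_rng(0)         # fixed seed: one sample strategy for nu(k)
    x = np.asarray(x0, dtype=float)
    steps = []                             # Cauchy steplengths lambda(x_j)
    for k in range(max_iter):
        g = A @ x - b                      # gradient of 0.5 x'Ax - b'x
        if np.linalg.norm(g) < tol:
            break
        steps.append((g @ g) / (g @ (A @ g)))         # lambda(x_k)
        nu = int(rng.integers(max(0, k - m), k + 1))  # nu(k) in {max(0,k-m),...,k}
        x = x - steps[nu] * g
    return x
```

Taking `nu(k) = k` recovers classical steepest descent, while delayed choices break its characteristic zig-zagging, which is the behavior the paper's experiments explore.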

10.1023/a:1013708715892 article EN Computational Optimization and Applications 2002-01-01

High-dimensional omics data often contain more variables than observations, which negatively impacts the performance of classical analysis methods. Dimensionality reduction is typically addressed through variable selection strategies that incorporate a penalty term into the model. While effective for selecting task-specific variables, this approach may not be optimal when the goal is to preserve the dataset structure and the overall biological information for multiple downstream analyses. In such cases, an a priori...

10.1101/2025.03.11.642670 preprint EN cc-by-nc-nd bioRxiv (Cold Spring Harbor Laboratory) 2025-03-14

An $n\times n$ correlation matrix has $k$ factor structure if its off-diagonal part agrees with that of a rank $k$ matrix. Such matrices arise, for example, in factor models of collateralized debt obligations (CDOs) and multivariate time series. We analyze the properties of these matrices and, in particular, obtain an explicit formula for the rank in the one factor case. Our main focus is on the nearness problem of finding the nearest $k$ factor correlation matrix $C(X) = \mathrm{diag}(I-XX^T) + XX^T$ to a given symmetric matrix, subject to natural nonlinear constraints on the elements of the $n\times k$ matrix $X$, where distance...

10.1137/090776718 article EN SIAM Journal on Matrix Analysis and Applications 2010-01-01
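The parametrization $C(X)$ and the Frobenius-norm objective are direct to write down. A small sketch, assuming only the formulas stated in the abstract; the function names are illustrative, and no optimizer over $X$ is included here.

```python
import numpy as np

def k_factor_correlation(X):
    """C(X) = diag(I - X X^T) + X X^T for an n-by-k matrix X:
    unit diagonal by construction, rank-k agreement off the diagonal."""
    P = X @ X.T
    return P - np.diag(np.diag(P)) + np.eye(len(P))

def nearness_objective(X, A):
    """Frobenius distance ||A - C(X)||_F that the nearness problem minimizes."""
    return np.linalg.norm(A - k_factor_correlation(X), "fro")
```

Replacing the diagonal of $XX^T$ with ones is what guarantees $C(X)$ has the unit diagonal of a correlation matrix for any $X$; the remaining constraints on the rows of $X$ are what make the nearness problem nonlinear.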

10.1016/s0024-3795(00)00327-x article EN publisher-specific-oa Linear Algebra and its Applications 2001-05-01

Dykstra's algorithm is a suitable alternating projection scheme for solving the optimization problem of finding the closest point to a given one in the intersection of a finite number of closed and convex sets. It has recently been used in a wide variety of applications. However, in practice, the commonly used stopping criteria are not robust and could stop the iterative process prematurely at a point that does not solve the problem. In this work we present a counterexample to illustrate the weakness of the commonly used criteria, and then develop robust stopping rules. Additional experimental results...

10.1137/03060062x article EN SIAM Journal on Scientific Computing 2005-01-01
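A two-set sketch of the scheme under discussion. Note that the simple "small change between iterates" stopping test used below is precisely the kind of naive criterion whose weakness the paper demonstrates; it is kept here only to make the sketch self-contained. The projection operators are supplied by the caller.

```python
import numpy as np

def dykstra(project_A, project_B, y, max_iter=1000, tol=1e-12):
    """Dykstra's alternating projections for two closed convex sets.
    The correction increments p, q make the limit the point of the
    intersection *closest* to y, not merely some feasible point."""
    x = np.asarray(y, dtype=float)
    p = np.zeros_like(x)
    q = np.zeros_like(x)
    for _ in range(max_iter):
        a = project_A(x + p)               # project shifted point onto the first set
        p = x + p - a                      # update first increment
        x_new = project_B(a + q)           # project onto the second set
        q = a + q - x_new                  # update second increment
        # naive small-change test; robust criteria are the paper's contribution
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return x
```

Dropping the increments `p` and `q` recovers von Neumann's plain alternating projections, which converge to some point of the intersection but generally not to the nearest one.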

10.1016/j.laa.2010.02.006 article EN publisher-specific-oa Linear Algebra and its Applications 2010-03-17

We apply Dykstra's alternating projection algorithm to the constrained least-squares matrix problem that arises naturally in statistics and mathematical economics. In particular, we are concerned with the problem of finding the closest symmetric positive definite bounded and patterned matrix, in the Frobenius norm, to a given matrix. In this work, we state the problem as the minimization of a convex function over the intersection of a finite collection of closed and convex sets in the vector space of square matrices. We present iterative schemes that exploit the geometry of the problem, and for which...

10.1002/(sici)1099-1506(199611/12)3:6<459::aid-nla82>3.0.co;2-s article EN Numerical Linear Algebra with Applications 1996-11-01