- Matrix Theory and Algorithms
- Advanced Numerical Methods in Computational Mathematics
- Electromagnetic Scattering and Analysis
- Advanced Optimization Algorithms Research
- Numerical methods for differential equations
- Advanced Measurement and Metrology Techniques
- Optical measurement and interference techniques
- Model Reduction and Neural Networks
- Stochastic Gradient Optimization Techniques
- Sparse and Compressive Sensing Techniques
- Surface Roughness and Optical Measurements
- Numerical methods in engineering
- Numerical Methods and Algorithms
- Advanced Surface Polishing Techniques
- Machine Learning and ELM
- Structural Health Monitoring Techniques
- Advanced Bandit Algorithms Research
- Tensor decomposition and applications
- Image Processing Techniques and Applications
- Advanced Numerical Analysis Techniques
- Differential Equations and Boundary Problems
- Advanced Control Systems Optimization
- Polynomial and algebraic computation
- Adhesion, Friction, and Surface Interactions
- Diverse Industrial Engineering Technologies
University of Trieste
2020-2024
University of Padua
2009-2019
Civita
2018
Centro de Investigaciones en Optica
1999-2014
Universitat Politècnica de València
2000
Philips (United States)
1982-1984
Summary In this paper, we describe and analyze the spectral properties of several exact block preconditioners for a class of double saddle point problems. Among these, we consider an inexact version of a block triangular preconditioner providing extremely fast convergence of the (F)GMRES method. We develop a spectral analysis of the preconditioned matrix, showing that the complex eigenvalues lie in a circle of radius 1, while the real eigenvalues are described in terms of the roots of a third-order polynomial with real coefficients. Numerical examples are reported to...
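As a rough illustration of the block triangular preconditioning idea described above (not the paper's exact preconditioner), the sketch below builds a small dense double (3x3-block) saddle point system and applies an ideal block upper-triangular preconditioner with exact Schur complements inside SciPy's GMRES. The block sizes, random data, and the sign convention on the middle block are assumptions made for the example; a practical method would use inexact Schur complement approximations.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n, m, p = 30, 15, 8

# Small dense double saddle point system K z = b
R = rng.standard_normal((n, n))
A = R @ R.T + n * np.eye(n)                      # SPD (1,1) block
B = rng.standard_normal((m, n))                  # full row rank
C = rng.standard_normal((p, m))                  # full row rank
K = np.block([[A, B.T, np.zeros((n, p))],
              [B, np.zeros((m, m)), C.T],
              [np.zeros((p, n)), C, np.zeros((p, p))]])
b = rng.standard_normal(n + m + p)

# Exact two-level Schur complements -- dense here for clarity only
S1 = B @ np.linalg.solve(A, B.T)
S2 = C @ np.linalg.solve(S1, C.T)

def apply_P_inv(r):
    """Back-substitution with the block upper-triangular preconditioner
    P = [[A, B^T, 0], [0, -S1, C^T], [0, 0, S2]]."""
    r1, r2, r3 = r[:n], r[n:n + m], r[n + m:]
    z3 = np.linalg.solve(S2, r3)
    z2 = np.linalg.solve(-S1, r2 - C.T @ z3)
    z1 = np.linalg.solve(A, r1 - B.T @ z2)
    return np.concatenate([z1, z2, z3])

P_inv = LinearOperator(K.shape, matvec=apply_P_inv)
x, info = gmres(K, b, M=P_inv, atol=1e-12)
print("relative residual:", np.linalg.norm(K @ x - b) / np.linalg.norm(b))
```

With exact Schur complements the preconditioned spectrum is tightly clustered and GMRES converges in a handful of iterations; the interest of the inexact variant is retaining most of this behavior at a fraction of the cost.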
Summary In this paper, we present preconditioning techniques to accelerate the convergence of Krylov solvers at each step of an inexact Newton's method for the computation of the leftmost eigenpairs of large and sparse symmetric positive definite matrices arising in large-scale scientific computations. We propose a two-stage spectral strategy: the first stage produces a very rough approximation of a number of eigenvectors; the second uses these approximations as starting vectors and also constructs a tuned preconditioner from...
Summary In this paper, we study a class of tuned preconditioners designed to accelerate both the DACG-Newton method and the implicitly restarted Lanczos method for the computation of the leftmost eigenpairs of large sparse symmetric positive definite matrices arising in large-scale scientific computations. These tuning strategies are based on low-rank modifications of a given initial preconditioner. We present some theoretical properties of the preconditioned matrix and experimentally show how the aforementioned methods...
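The low-rank tuning idea can be illustrated with the classical rank-one tuned preconditioner (a generic construction, not necessarily the exact modification used in the paper): given any initial preconditioner P and an approximate eigenvector v, adding the rank-one term built from w = (A - P)v makes the tuned operator act on v exactly as A does.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
R = rng.standard_normal((n, n))
A = R @ R.T + np.eye(n)                 # SPD test matrix

P = np.diag(np.diag(A))                 # cheap initial (Jacobi) preconditioner
v = rng.standard_normal(n)
v /= np.linalg.norm(v)                  # approximate eigenvector (random here)

# Rank-one tuning: P_tuned v = A v by construction
w = A @ v - P @ v
P_tuned = P + np.outer(w, w) / (w @ v)  # requires w @ v != 0

print(np.allclose(P_tuned @ v, A @ v))  # True
```

The point of the construction is that the tuned preconditioner reproduces the action of A on the search subspace, which is what accelerates eigensolvers such as DACG-Newton or restarted Lanczos; preserving positive definiteness additionally requires w @ v > 0.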
Abstract In this article, we address the efficient numerical solution of linear and quadratic programming problems, often of large scale. With this aim, we devise an infeasible interior point method, blended with the proximal method of multipliers, which in turn results in a primal-dual regularized method. Its application gives rise to a sequence of increasingly ill-conditioned linear systems that cannot always be solved by factorization methods, due to memory and CPU time restrictions. We propose a novel preconditioning strategy that is based on...
Summary We focus on efficient preconditioning techniques for sequences of Karush-Kuhn-Tucker (KKT) linear systems arising from the interior point (IP) solution of large convex quadratic programming problems. Constraint preconditioners (CPs), although very effective in accelerating Krylov methods for KKT systems, have a high computational cost in some instances, because their factorization may be the most time-consuming task at each IP iteration. We overcome this problem by computing the CP from scratch only at selected...
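A minimal sketch of the constraint-preconditioner idea follows (the sizes, random data, and the diagonal approximation G = diag(H) are assumptions for the example): the CP keeps the constraint blocks of the KKT matrix exactly and replaces the Hessian by a cheap approximation, so its factorization can be computed once and reused for several Krylov solves.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(2)
n, m = 40, 15
R = rng.standard_normal((n, n))
H = R @ R.T + n * np.eye(n)                      # QP Hessian (SPD)
A = rng.standard_normal((m, n))                  # full-row-rank constraints

K = np.block([[H, A.T], [A, np.zeros((m, m))]])  # KKT matrix
b = rng.standard_normal(n + m)

# Constraint preconditioner: exact constraint blocks, H -> diag(H)
G = np.diag(np.diag(H))
CP = np.block([[G, A.T], [A, np.zeros((m, m))]])
lu = lu_factor(CP)                               # factorize once, then reuse

M = LinearOperator(K.shape, matvec=lambda r: lu_solve(lu, r))
x, info = gmres(K, b, M=M, restart=60, atol=1e-12)
print("relative residual:", np.linalg.norm(K @ x - b) / np.linalg.norm(b))
```

In an IP context the same factorized CP would be reused across several IP iterations (the theme of the abstract above), refreshing it from scratch only when the Hessian has changed enough to degrade the Krylov convergence.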
Abstract In this work, we introduce a novel stochastic second-order method, within the framework of a non-monotone trust-region approach, for solving unconstrained, nonlinear, and non-convex optimization problems arising in the training of deep neural networks. The proposed algorithm makes use of subsampling strategies that yield noisy approximations of the finite-sum objective function and its gradient. We introduce an adaptive sample size strategy, based on inexpensive additional sampling, to control the resulting approximation...
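The subsampling idea can be sketched on a toy finite-sum least-squares problem. Everything below is illustrative: a plain SGD step stands in for the paper's trust-region step, and the "additional sampling" test (comparing two independent batches) only demonstrates how a cheap extra sample can drive the sample-size adaptation, not the paper's actual criterion.

```python
import numpy as np

rng = np.random.default_rng(3)
N, d = 1000, 5
X = rng.standard_normal((N, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(N)    # finite-sum least-squares data

def subsampled_grad(w, sample):
    """Noisy gradient of f(w) = (1/N) sum_i 0.5 (x_i^T w - y_i)^2 from a subsample."""
    Xs, ys = X[sample], y[sample]
    return Xs.T @ (Xs @ w - ys) / len(sample)

w = np.zeros(d)
S = 32                                           # initial sample size
for k in range(200):
    g = subsampled_grad(w, rng.choice(N, size=S, replace=False))
    # Inexpensive additional sampling: draw a second independent batch; if the
    # two estimates disagree strongly, noise dominates, so grow the sample size.
    g2 = subsampled_grad(w, rng.choice(N, size=S, replace=False))
    if np.linalg.norm(g - g2) > 0.5 * np.linalg.norm(g) and S < N // 2:
        S *= 2
    w -= 0.05 * g                                # SGD stand-in for the TR step

print("final sample size:", S)
print("full gradient norm:", np.linalg.norm(X.T @ (X @ w - y) / N))
```

The adaptive behavior is visible in the final sample size: near a minimizer the subsampled gradients become noise-dominated, the disagreement test fires, and the batch grows so that the approximation error stays controlled.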
The present paper describes a parallel preconditioned algorithm for the solution of partial eigenvalue problems for large sparse symmetric matrices on parallel computers. Namely, we consider the Deflation-Accelerated Conjugate Gradient (DACG) method accelerated by factorized-sparse-approximate-inverse- (FSAI-) type preconditioners. We provide an enhanced implementation of the FSAI preconditioner and make use of the recently developed Block FSAI-IC preconditioner, which combines the Block FSAI and block Jacobi-IC preconditioners. Results on matrices of large size arising from finite...
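A toy version of the preconditioned eigensolver idea follows (a simple Jacobi preconditioner and a plain gradient search stand in for FSAI and for the full DACG scheme with deflation, which are beyond a short sketch): minimize the Rayleigh quotient by Rayleigh-Ritz on the span of the current iterate and the preconditioned gradient.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20
# 1D Laplacian: a standard SPD model matrix
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Minv = np.diag(1.0 / np.diag(A))        # Jacobi preconditioner (FSAI stand-in)

x = rng.standard_normal(n)
x /= np.linalg.norm(x)
for k in range(1000):
    q = x @ A @ x                       # Rayleigh quotient (since ||x|| = 1)
    g = A @ x - q * x                   # its gradient direction
    d = Minv @ g                        # preconditioned search direction
    # Rayleigh-Ritz on span{x, d} = exact line search for the quotient
    V, _ = np.linalg.qr(np.column_stack([x, d]))
    mu, Y = np.linalg.eigh(V.T @ A @ V)
    x = V @ Y[:, 0]                     # Ritz vector of the smallest Ritz value
    x /= np.linalg.norm(x)

print("computed leftmost eigenvalue:", x @ A @ x)
print("exact    leftmost eigenvalue:", np.linalg.eigvalsh(A)[0])
```

The real DACG additionally deflates converged eigenvectors to compute several leftmost pairs, and the quality of the (parallel) FSAI preconditioner is what determines the iteration count on large problems.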
In this paper, we propose an efficiently preconditioned Newton method for the computation of the leftmost eigenpairs of large and sparse symmetric positive definite matrices. A sequence of preconditioners based on the BFGS update formula is proposed for the conjugate gradient solution of the linearized systems in the solution of Au = q(u)u, q(u) being the Rayleigh quotient. We give theoretical evidence that the preconditioned Jacobians remain close to the identity matrix if the initial preconditioned Jacobian is so. Numerical results on matrices arising from various realistic...
We propose a parallel preconditioner for the Newton method in the computation of the leftmost eigenpairs of large and sparse symmetric positive definite matrices. A sequence of preconditioners starting from an enhanced approximate inverse RFSAI (Bergamaschi and Martínez, 2012) and enriched by a BFGS-like update formula is proposed to accelerate the preconditioned conjugate gradient solution of the linearized system to solve Au = q(u)u...
Let $Ax = b$ be a linear system where $A$ is a symmetric positive definite matrix. Preconditioners for the conjugate gradient method based on multisplittings obtained by incomplete Cholesky factorizations of $A$ are studied. The validity of these preconditioners when $A$ is an M-matrix is proved, and a parallel implementation is presented.
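A small sketch of the incomplete-factorization preconditioning that underlies the multisplitting construction: SciPy has no incomplete Cholesky routine, so `spilu` (an incomplete LU) stands in for one splitting's incomplete factor; the combination of several such factorizations into a multisplitting is omitted here.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg, spilu

n = 200
# SPD M-matrix: 1D Laplacian
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

# One incomplete factorization used as the CG preconditioner
ilu = spilu(A, drop_tol=1e-3)
M = LinearOperator((n, n), matvec=ilu.solve)

x, info = cg(A, b, M=M, atol=1e-10)
print("relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

The M-matrix hypothesis in the abstract is what guarantees that the incomplete factorizations exist and yield convergent splittings; on this tridiagonal Laplacian the incomplete factor is essentially exact, so CG converges in very few iterations.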