- Advanced Optimization Algorithms Research
- Optimization and Variational Analysis
- Sparse and Compressive Sensing Techniques
- Advanced Database Systems and Queries
- Parallel Computing and Optimization Techniques
- Matrix Theory and Algorithms
- Distributed and Parallel Computing Systems
- Numerical methods in inverse problems
- Optimization and Search Problems
- Complexity and Algorithms in Graphs
- Scheduling and Optimization Algorithms
- Machine Learning and Algorithms
- Optimization and Packing Problems
- Vehicle Routing Optimization Methods
- Stochastic Gradient Optimization Techniques
- Risk and Portfolio Optimization
- Distributed systems and fault tolerance
- Interconnection Networks and Systems
- Big Data and Business Intelligence
- Data Management and Algorithms
- Advanced Data Storage Technologies
- Imbalanced Data Classification Techniques
- Mathematical Inequalities and Applications
- Transportation and Mobility Innovations
- Model Reduction and Neural Networks
- Rutgers, The State University of New Jersey (2011-2023)
- Rutgers Sexual and Reproductive Health and Rights (2001-2023)
- University of Stuttgart (2023)
- Sandia National Laboratories California (2021)
- Management Sciences (United States) (2019-2020)
- Infosys (India) (2018)
- Intelligent Machines (Sweden) (1994-2002)
- Universidade de São Paulo (2001)
- Mathematical Sciences Research Institute (1992-1994)
- Massachusetts Institute of Technology (1987-1988)
A Bregman function is a strictly convex, differentiable function that induces a well-behaved distance measure, or D-function, on Euclidean space. This paper shows that, for every Bregman function, there exists a “nonlinear” version of the proximal point algorithm, and presents an accompanying convergence theory. Applying this generalization of the proximal point algorithm to convex programming, one obtains the proximal minimization algorithm of Censor and Zenios and a wide variety of new multiplier methods. These methods are different from those studied by Kort and Bertsekas,...
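For reference, the D-function induced by a Bregman function h and the resulting nonlinear proximal step for minimizing a convex function f can be written as follows (a standard textbook formulation, not quoted from the paper; the stepsize c_k is an illustrative parameter):

```latex
% Bregman distance (D-function) induced by a strictly convex, differentiable h:
D_h(x, y) \;=\; h(x) \;-\; h(y) \;-\; \langle \nabla h(y),\, x - y \rangle .

% Nonlinear (Bregman) proximal point step with stepsize c_k > 0 for minimizing
% a convex function f; the classical proximal point method is recovered by
% taking h(x) = \tfrac{1}{2}\|x\|^2, so that D_h(x,y) = \tfrac{1}{2}\|x - y\|^2:
x^{k+1} \;=\; \arg\min_{x} \Bigl\{ f(x) \;+\; \tfrac{1}{c_k}\, D_h\bigl(x, x^{k}\bigr) \Bigr\}.
```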
Consider two variations of the method of multipliers, or classical augmented Lagrangian method, for convex programming. The proximal method of multipliers adjoins quadratic primal terms to the augmented Lagrangian, and has a stronger convergence theory than the standard method. The alternating direction method of multipliers, on the other hand, uses a special kind of partial minimization of the augmented Lagrangian and is conducive to the derivation of decomposition methods that find application in parallel computing. This note shows a way of combining the features of these two variations, yielding a method closely related to some algorithms...
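For orientation, here is a minimal alternating direction sketch on a standard splitting (a lasso problem with the constraint x = z). It is illustrative only and is not the combined proximal/alternating-direction method discussed in the note; the function name, problem data, and parameters below are hypothetical.

```python
# Minimal ADMM sketch: solve  min_x 0.5*||A x - b||^2 + lam*||x||_1  by splitting
# x = z and alternately minimizing the augmented Lagrangian in x and z, then
# updating the (scaled) multiplier u.  Illustrative only.
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)        # u = scaled multiplier
    AtA, Atb = A.T @ A, A.T @ b
    lhs = AtA + rho * np.eye(n)                               # factor once in practice
    for _ in range(iters):
        x = np.linalg.solve(lhs, Atb + rho * (z - u))         # x-minimization (quadratic)
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0) # z-minimization: soft threshold
        u = u + x - z                                         # multiplier update
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100); x_true[:5] = rng.standard_normal(5)
    b = A @ x_true + 0.01 * rng.standard_normal(40)
    print(np.round(admm_lasso(A, b, lam=0.1)[:8], 3))
```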
We describe a general projective framework for finding a zero of the sum of n maximal monotone operators over a real Hilbert space. Unlike prior methods for this problem, we neither assume $n=2$ nor first reduce the problem to the case $n=2$. Our analysis defines a closed convex extended solution set for which one can construct a separating hyperplane by individually evaluating the resolvent of each operator. At the cost of a single, computationally simple projection step, this construction gives rise to a family of splitting methods of unprecedented flexibility: numerous...
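The construction can be sketched in generic projective-splitting notation (not necessarily the paper's exact statement; the resolvent parameters c_i > 0 are illustrative):

```latex
% Extended solution set for maximal monotone T_1, \dots, T_n on a Hilbert space H,
% written in a form common in the projective-splitting literature:
S \;=\; \Bigl\{ (z, w_1, \dots, w_n) \;:\; w_i \in T_i(z)\ (i = 1, \dots, n),
        \ \textstyle\sum_{i=1}^{n} w_i = 0 \Bigr\}.

% Evaluating each resolvent, x_i = (I + c_i T_i)^{-1}(z + c_i w_i), yields
% y_i = \tfrac{1}{c_i}\,(z + c_i w_i - x_i) \in T_i(x_i), and monotonicity gives
\varphi(z', w'_1, \dots, w'_n) \;=\; \sum_{i=1}^{n} \bigl\langle z' - x_i,\; y_i - w'_i \bigr\rangle \;\le\; 0
% at every point of S.  On the subspace where \sum_i w'_i = 0 the bilinear terms
% cancel, so \varphi is affine there and \{\varphi \le 0\} is the separating
% half-space onto which the current iterate is projected.
```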
Drawing on recent developments in discrete-time fixed-income options theory, we propose a stochastic programming procedure, which we call stochastic dedication, for managing asset/liability portfolios with interest rate contingent claims. The model uses scenario generation to combine deterministic dedication techniques with duration matching methods, and provides the portfolio manager with a risk/return Pareto-optimal frontier from which a portfolio may be selected based on individual risk attitudes. We employ a metric that can be interpreted...
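The flavor of such a model can be suggested by a generic scenario-based cash-flow matching formulation; the variables, constraints, and shortfall penalty below are purely illustrative and are not the paper's actual model or risk metric.

```latex
% Illustrative scenario-based dedication model (not the paper's formulation).
% x_j: holding of asset j at price p_j;  s: interest-rate scenario with
% probability \pi_s;  c_{jt}^{s}: asset cash flow;  L_t^{s}: liability payment;
% v_t^{s}: surplus reinvested at the short rate r_t^{s}.
\begin{aligned}
\min_{x \ge 0}\quad
  & \sum_{j} p_j x_j \;+\; \lambda \sum_{s} \pi_s \,\max\bigl(0,\, -v_T^{s}\bigr) \\
\text{s.t.}\quad
  & v_t^{s} \;=\; (1 + r_t^{s})\, v_{t-1}^{s} \;+\; \sum_{j} c_{jt}^{s}\, x_j \;-\; L_t^{s},
    \qquad \forall\, t,\ \forall\, s, \qquad v_0^{s} = 0 .
\end{aligned}
% Sweeping the risk-aversion weight \lambda traces out a cost-versus-shortfall
% frontier of the kind described above.
```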
This article applies splitting techniques developed for set-valued maximal monotone operators to affine variational inequalities, including, as a special case, the classical linear complementarity problem. We give a unified presentation of several splitting algorithms for such operators, and then apply these results to obtain two classes of methods for affine variational inequalities. The second class resembles matrix splitting, but has a novel “under-relaxation” step and converges under more general conditions. In particular, the convergence proofs do not...
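As a rough illustration of the matrix-splitting idea with an under-relaxation step, the sketch below uses a generic projected-Jacobi variant with convex-combination damping; it is not the article's algorithm, and its splitting and convergence conditions are only a simple special case.

```python
# Generic matrix-splitting sketch for the linear complementarity problem
# LCP(q, M): find z >= 0 with M z + q >= 0 and z^T (M z + q) = 0.
# Splitting M = D + (M - D) with D = diag(M) (assumed positive), followed by a
# damping ("under-relaxation") step.  Illustrative only.
import numpy as np

def projected_jacobi_lcp(M, q, rho=0.5, iters=500, tol=1e-10):
    d = np.diag(M)                                # simple splitting: M = D + (M - D)
    z = np.zeros(len(q))
    for _ in range(iters):
        w = M @ z + q
        z_hat = np.maximum(0.0, z - w / d)        # projected Jacobi candidate
        z_new = (1.0 - rho) * z + rho * z_hat     # under-relaxation (0 < rho <= 1)
        if np.linalg.norm(z_new - z) <= tol:
            return z_new
        z = z_new
    return z

if __name__ == "__main__":
    # Small symmetric positive definite example; complementarity residual ~ 0.
    M = np.array([[4.0, 1.0], [1.0, 3.0]])
    q = np.array([-1.0, -2.0])
    z = projected_jacobi_lcp(M, q)
    print(z, z @ (M @ z + q))
```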
This "proof of concept" paper describes parallel solution general mixed integer programs by a branch-and-bound algorithm on the CM-5 multiprocessing system. It goes beyond prior work implementing reasonably realistic general-purpose programming algorithm, as opposed to specialized method for narrow class problems. shows how use capabilities produce an efficient implementation employing centrally controlled search, achieving near-linear speedups using 64–128 processors variety difficult...