- Stochastic Gradient Optimization Techniques
- Sparse and Compressive Sensing Techniques
- Optimization and Variational Analysis
- Risk and Portfolio Optimization
- Advanced Optimization Algorithms Research
- Markov Chains and Monte Carlo Methods
- Mining Techniques and Economics
- Distributed Control Multi-Agent Systems
- Statistical Methods and Inference
- Economic and Environmental Valuation
- Point Processes and Geometric Inequalities
- Belt Conveyor Systems Engineering
- Cooperative Communication and Network Coding
- Complexity and Algorithms in Graphs
- Transportation Planning and Optimization
- Probabilistic and Robust Engineering Design
- Topology Optimization in Engineering
- Optimization and Mathematical Programming
- Water Resources Management and Optimization
- Integrated Circuits and Semiconductor Failure Analysis
- Oil and Gas Production Techniques
- Advanced Image Processing Techniques
- Reservoir Engineering and Simulation Methods
- Computational Geometry and Mesh Generation
- Machine Learning and ELM
University of Arizona
2020-2024
Rogers (United States)
2021-2023
Pennsylvania State University
2016-2021
We consider minimizing $f(x) = \mathbb{E}[f(x,\omega)]$ when $f(x,\omega)$ is possibly nonsmooth and either strongly convex or merely convex in $x$. (I) Strongly convex. When $f$ is $\mu$-strongly convex in $x$, we propose a variable sample-size accelerated proximal scheme (VS-APM) and apply it on $f_{\eta}(x)$, the ($\eta$-)Moreau smoothed variant of $\mathbb{E}[f(x,\omega)]$; we term such a scheme (m-VS-APM). We consider three settings. (a) Bounded domains. In this setting, VS-APM displays linear convergence with inexact gradient steps, each of which requires utilizing an...
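The Moreau smoothing behind (m-VS-APM) replaces a nonsmooth function with an $\eta$-envelope whose gradient is computable from the proximal operator, $\nabla f_{\eta}(x) = (x - \mathrm{prox}_{\eta f}(x))/\eta$. A minimal sketch of that mechanism, using the $\ell_1$ norm as a stand-in nonsmooth objective (the function, step size, and iteration count are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def prox_l1(v, eta):
    # Proximal operator of eta * ||.||_1: componentwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - eta, 0.0)

def moreau_grad(v, eta):
    # Gradient of the eta-Moreau envelope: (v - prox_{eta f}(v)) / eta,
    # which is (1/eta)-Lipschitz even though ||.||_1 itself is nonsmooth.
    return (v - prox_l1(v, eta)) / eta

# Gradient descent on the smoothed objective; the minimizer of ||.||_1 is 0.
x = np.array([2.0, -1.5, 0.3])
eta = 0.1
for _ in range(200):
    x = x - eta * moreau_grad(x, eta)  # step size 1/L with L = 1/eta
print(np.abs(x).max())
```

The same envelope-gradient oracle is what an accelerated proximal scheme would query, with the expectation replaced by increasingly large sample averages.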
The globalization of the manufacturing process and supply chain for electronic hardware has been driven by the need to maximize profitability while lowering risk in a technologically advanced silicon sector. However, many IPs' security features have been broken because of the rise in successful attacks. Existing efforts frequently ignore numerous dangers in favor of fixing a particular vulnerability. This inspired the development of a unique method that uses emerging spin-based devices to obfuscate circuitry and secure intellectual...
Stochastic nonconvex-concave min-max saddle point problems appear in many machine learning and control applications, including distributionally robust optimization, generative adversarial networks, and adversarial learning. In this paper, we consider a class of nonconvex saddle point problems where the objective function satisfies the Polyak-Łojasiewicz condition with respect to the minimization variable and is concave with respect to the maximization variable. The existing methods for solving such problems often suffer from slow convergence and/or contain multiple loops. Our main...
We consider a stochastic variational inequality (SVI) problem with a continuous and monotone mapping over a closed convex set. In strongly monotone regimes, we present a variable sample-size averaging scheme (VS-Ave) that achieves a linear rate with an optimal oracle complexity. In addition, the iteration complexity is shown to display a muted dependence on the condition number compared with standard variance-reduced projection schemes. To contend with merely monotone maps, we develop amongst the first proximal-point algorithms with variable sample-sizes...
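The variable sample-size idea can be illustrated with a plain projection scheme: average $N_k$ operator samples at step $k$ so that gradient noise shrinks as the iterates approach the solution. A hypothetical toy SVI, where the affine map, noise model, box constraint, and step size are all illustrative assumptions rather than the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0], [1.0, 3.0]])   # mean operator matrix (strongly monotone)
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(A, b)           # solution; lies inside the box below

def sample_F(x, n):
    # Stochastic operator A x - b plus zero-mean noise, averaged over n samples.
    noise = rng.normal(0.0, 1.0, size=(n, x.size)).mean(axis=0)
    return A @ x - b + noise

def project_box(x, lo=-5.0, hi=5.0):
    return np.clip(x, lo, hi)            # Euclidean projection onto the feasible box

x = np.zeros(2)
gamma = 0.2
for k in range(1, 200):
    Nk = k * k                           # polynomially increasing sample size
    x = project_box(x - gamma * sample_F(x, Nk))
print(np.linalg.norm(x - x_star))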
In the last several years, stochastic quasi-Newton (SQN) methods have assumed increasing relevance in solving a breadth of machine learning and optimization problems. Inspired by recently presented SQN schemes [1]-[3], we consider merely convex and possibly nonsmooth stochastic programs and utilize increasing sample-sizes to allow for variance reduction. To this end, we make the following contributions. (i) A regularized and smoothed variable sample-size BFGS update (rsL-BFGS) is developed that can accommodate nonsmooth convex objectives utilizing...
Classical theory for quasi-Newton schemes has focused on smooth, deterministic, unconstrained optimization, whereas recent forays into stochastic convex optimization have largely resided in unconstrained, smooth, and strongly convex regimes. Naturally, there is a compelling need to address nonsmoothness, the lack of strong convexity, and the presence of constraints. Accordingly, this paper presents a framework that can process merely convex and possibly nonsmooth (but smoothable) problems. We propose a scheme that combines iterative smoothing...
The goal in this article is to approximate the Price of Stability (PoS) of stochastic Nash games using stochastic approximation (SA) schemes. PoS is among the most popular metrics in game theory and provides an avenue for estimating the efficiency of games. In particular, evaluating the PoS can help with designing efficient networked systems, including communication networks and power market mechanisms. Motivated by the absence of methods for computing the PoS, first we consider optimization problems with a nonsmooth and merely convex objective function...
Given a sampling budget M, stochastic approximation (SA) schemes for constrained convex programs generally utilize a single sample for each projection, requiring an effort of M projection operations, a possibly significant complexity. We present an extragradient-based variable sample-size SA scheme (eg-VSSA) that uses $N_k$ samples at step $k$, where $\sum_k N_k \leq M$. We make the following contributions: (i) In strongly convex regimes, the expected error decays linearly in the number of steps; (ii) in convex settings, if the sample-size is increased at suitable rates...
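The extragradient structure takes two projected steps per iteration: an extrapolation at the current point and an update using the operator evaluated at the extrapolated point, each with a batch of $N_k$ samples. A minimal sketch on a hypothetical monotone (not strongly monotone) saddle operator, where a plain projected gradient step would cycle; the operator, noise level, box, and step size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_F(z, n):
    # Monotone operator of the toy saddle f(x, y) = x*y: F(x, y) = (y, -x),
    # observed through an average of n noisy samples.
    noise = rng.normal(0.0, 0.5, size=(n, 2)).mean(axis=0)
    return np.array([z[1], -z[0]]) + noise

def project(z):
    return np.clip(z, -2.0, 2.0)  # simple box constraint containing the solution (origin)

z = np.array([1.0, 1.0])
gamma = 0.3
for k in range(1, 200):
    Nk = k * k                                        # variable sample size within the budget M
    z_half = project(z - gamma * sample_F(z, Nk))     # extrapolation (first projection)
    z = project(z - gamma * sample_F(z_half, Nk))     # update at the extrapolated point
print(np.linalg.norm(z))
```

Counting two projections per step, the scheme performs far fewer than M projections while still consuming the full sampling budget.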
We consider a stochastic Inverse Variational Inequality (IVI) problem defined by a continuous and cocoercive map over a closed convex set. Motivated by the absence of performance guarantees for stochastic IVI, we present a variance-reduced projection-based gradient method. Our proposed method ensures an almost sure convergence of the generated iterates to the solution, and we establish a rate guarantee. To verify our results, we apply the algorithm to a network equilibrium control problem.
We consider (stochastic) convex-concave saddle point (SP) problems with high-dimensional decision variables, arising in various machine learning problems. To contend with the challenges of computing full gradients, we employ a randomized block-coordinate primal-dual scheme in which randomly selected primal and dual blocks of variables are updated. We consider both deterministic and stochastic settings, where partial gradients and their sampled estimates are used, respectively, at each iteration. We investigate the convergence...
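The block-coordinate idea can be sketched on a small strongly-convex-strongly-concave saddle problem: at each iteration only one randomly chosen primal block and one dual block are touched, so only the corresponding partial gradients are formed. Everything concrete below (the bilinear-plus-quadratic objective, block sizes, step size, iteration count) is an illustrative assumption, not the paper's scheme:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
A = rng.normal(size=(n, n)) / np.sqrt(n)   # coupling matrix (illustrative)
mu, gamma = 1.0, 0.1
blocks = [np.arange(i, i + 2) for i in range(0, n, 2)]   # 4 blocks of 2 coordinates

# Toy saddle: L(x, y) = x^T A y + (mu/2)||x||^2 - (mu/2)||y||^2,
# whose unique saddle point is (0, 0).
x, y = np.ones(n), np.ones(n)
for _ in range(4000):
    ib = blocks[rng.integers(len(blocks))]   # randomly selected primal block
    jb = blocks[rng.integers(len(blocks))]   # randomly selected dual block
    gx = A[ib, :] @ y + mu * x[ib]           # partial gradient in x, block ib only
    gy = A[:, jb].T @ x - mu * y[jb]         # partial gradient in y, block jb only
    x[ib] = x[ib] - gamma * gx               # primal (descent) block update
    y[jb] = y[jb] + gamma * gy               # dual (ascent) block update
print(np.linalg.norm(x), np.linalg.norm(y))
```

Each iteration costs O(block size × n) rather than O(n²) for a full primal-dual gradient step, which is the point of block coordinates in high dimension.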
This paper is focused on a stochastic quasi-variational inequality (SQVI) problem with a continuous and strongly-monotone mapping over a closed convex set where the projection onto the constraint set may not be easy to compute. We present an inexact variance-reduced scheme to solve SQVI problems and analyze its convergence rate and oracle complexity. A linear rate is obtained by progressively increasing the sample-size in approximating the operator. Moreover, we show how competition among blood donation organizations can be modeled...
Current technologies have made the transition from surface to underground mining methods for mineral extraction feasible and economically viable. Determining the transition point from one method to the other for deposits that require exploitation with both is challenging. The existing research integrates production scheduling optimization with determining the transition depth that maximizes net present value (NPV), making the problem computationally intractable. However, these studies do not consider some realistic operational constraints...
While the Variational Inequality (VI) is a well-established mathematical framework that subsumes Nash equilibrium and saddle-point problems, less is known about its extension, the Quasi-Variational Inequality (QVI). QVI allows for cases where the constraint set changes as the decision variable varies, allowing for a more versatile setting. In this paper, we propose extra-gradient and gradient-based methods for solving a class of monotone Stochastic Quasi-Variational Inequalities (SQVI) and establish a rigorous convergence rate analysis for these methods. Our...
In this paper, we address variational inequalities (VI) with a finite-sum structure. We introduce a novel single-loop stochastic variance-reduced algorithm, incorporating the Bregman distance function, and establish an optimal convergence guarantee under the monotone setting. Additionally, we explore a structured class of non-monotone problems that exhibit weak Minty solutions, and analyze the complexity of our proposed method, highlighting a significant improvement over existing approaches. Numerical experiments...
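One concrete instance of a Bregman-distance scheme is mirror-prox with the entropy (KL) distance on the probability simplex, shown here on a tiny deterministic matrix game. The game matrix, step size, and iterate averaging are illustrative assumptions; the paper's algorithm is additionally single-loop and variance-reduced, which this sketch omits:

```python
import numpy as np

B = np.array([[0.0, 1.0], [1.0, 0.0]])   # 2x2 zero-sum game; equilibrium is (1/2, 1/2) for both players

def mirror_step(p, g, gamma):
    # Entropy-Bregman step on the simplex: argmin_u <g, u> + D_KL(u || p),
    # which reduces to a multiplicative-weights update.
    w = p * np.exp(-gamma * g)
    return w / w.sum()

p, q = np.array([0.9, 0.1]), np.array([0.2, 0.8])
gamma, T = 0.1, 500
P, Q = np.zeros(2), np.zeros(2)
for _ in range(T):
    # Mirror-prox: extrapolation then update, both in the KL geometry,
    # for the monotone operator F(p, q) = (B q, -B^T p).
    ph = mirror_step(p, B @ q, gamma)
    qh = mirror_step(q, -B.T @ p, gamma)
    p = mirror_step(p, B @ qh, gamma)
    q = mirror_step(q, -B.T @ ph, gamma)
    P += ph                               # average the extrapolated (leading) points
    Q += qh
P, Q = P / T, Q / T
print(P, Q)
```

The KL geometry keeps the iterates on the simplex without any Euclidean projection, which is the usual motivation for choosing a Bregman distance matched to the constraint set.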
Binary classification of high-dimensional, low-sample-size datasets is feasible with channelized quadratic observers. Channel solutions can be optimized iteratively. A semi-supervised extension is developed for unlabeled data combined with smaller quantities of labeled data.