Elad Hazan

ORCID: 0000-0002-1566-3216
Research Areas
  • Advanced Bandit Algorithms Research
  • Machine Learning and Algorithms
  • Sparse and Compressive Sensing Techniques
  • Stochastic Gradient Optimization Techniques
  • Optimization and Search Problems
  • Reinforcement Learning in Robotics
  • Complexity and Algorithms in Graphs
  • Advanced Optimization Algorithms Research
  • Advanced Control Systems Optimization
  • Data Stream Mining Techniques
  • Machine Learning and Data Classification
  • Auction Theory and Applications
  • Neural Networks and Applications
  • Markov Chains and Monte Carlo Methods
  • Distributed Sensor Networks and Detection Algorithms
  • Adversarial Robustness in Machine Learning
  • Model Reduction and Neural Networks
  • Game Theory and Applications
  • Domain Adaptation and Few-Shot Learning
  • Matrix Theory and Algorithms
  • Adaptive Dynamic Programming Control
  • Face and Expression Recognition
  • Advanced Graph Theory Research
  • Risk and Portfolio Optimization
  • Gaussian Processes and Bayesian Inference

Princeton University
2014-2024

Google (United States)
2018-2024

University of Waterloo
2019

The University of Texas at Austin
2019

Technion – Israel Institute of Technology
2009-2015

Microsoft Research (India)
2015

IBM (United States)
2006-2010

IBM Research - Almaden
2007-2010

Laboratoire d'Informatique de Paris-Nord
2005

Tel Aviv University
2003

Algorithms in varied fields use the idea of maintaining a distribution over a certain set and use the multiplicative update rule to iteratively change these weights. Their analyses are usually very similar and rely on an exponential potential function. In this survey we present a simple meta-algorithm that unifies many of these disparate algorithms and derives them as simple instantiations of the meta-algorithm. We feel that since its analysis is so simple, and its applications so broad, it should be a standard part of algorithms courses, like "divide and conquer."
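
The multiplicative weights idea described above is easiest to see in the classical experts setting. Below is a minimal sketch, assuming n experts with losses in [0, 1]; the names `multiplicative_weights`, `loss_matrix`, and `eta` are illustrative and not taken from the survey.

```python
import numpy as np

def multiplicative_weights(loss_matrix, eta=0.1):
    """Hedge-style multiplicative weights over n experts.

    loss_matrix: T x n array; loss_matrix[t, i] is the loss of expert i
    at round t, assumed to lie in [0, 1].
    Returns the sequence of distributions played.
    """
    T, n = loss_matrix.shape
    w = np.ones(n)                             # uniform initial weights
    history = []
    for t in range(T):
        p = w / w.sum()                        # normalize weights to a distribution
        history.append(p)
        w = w * np.exp(-eta * loss_matrix[t])  # downweight lossy experts
    return np.array(history)

# toy usage: two experts, the second consistently better
losses = np.array([[0.9, 0.1]] * 50)
print(multiplicative_weights(losses, eta=0.3)[-1])  # mass shifts to expert 2
```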

10.4086/toc.2012.v008a006 article EN cc-by Theory of Computing 2012-01-01

10.1561/2400000013 article EN Foundations and Trends® in Optimization 2016-01-01

We experimentally study the on-line investment algorithms first proposed by Agarwal and Hazan and later extended, which achieve almost the same wealth as the best constant-rebalanced portfolio determined in hindsight. These algorithms are the first to combine optimal logarithmic regret bounds with efficient deterministic computability. They are based on the Newton method for offline optimization which, unlike previous approaches, exploits second order information. After analyzing the algorithm using the potential function introduced by Hazan,...
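
As a rough illustration of the Newton-based approach, the sketch below implements an Online Newton Step-style portfolio update. For simplicity it uses a Euclidean projection onto the simplex rather than the norm-weighted projection of the actual algorithm, and the names and parameters (`ons_portfolio`, `beta`, `eps`) are illustrative assumptions, not the paper's.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def ons_portfolio(price_relatives, beta=1.0, eps=0.1):
    """Online Newton Step-style portfolio selection (simplified sketch).

    price_relatives: T x n array; entry [t, i] is price(t+1)/price(t)
    of asset i. Returns the sequence of portfolios played.
    """
    T, n = price_relatives.shape
    x = np.ones(n) / n                  # start from the uniform portfolio
    A = eps * np.eye(n)                 # accumulated second-order information
    portfolios = []
    for t in range(T):
        portfolios.append(x.copy())
        r = price_relatives[t]
        g = -r / (x @ r)                # gradient of the loss -log(x . r)
        A += np.outer(g, g)
        y = x - (1.0 / beta) * np.linalg.solve(A, g)  # Newton-like step
        x = project_simplex(y)          # keep x a valid portfolio
    return np.array(portfolios)
```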

10.1145/1143844.1143846 article EN 2006-01-01

10.1007/s00037-006-0205-6 article EN Computational Complexity 2006-04-25

We design a non-convex second-order optimization algorithm that is guaranteed to return an approximate local minimum in time which scales linearly in the underlying dimension and the number of training examples. The time our algorithm takes to find an approximate local minimum is even faster than that of gradient descent to find a critical point. Our algorithm applies to a general class of optimization problems including training a neural network and other objectives arising in machine learning.
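
The paper's algorithm relies on fast Hessian-vector products; as a much cruder illustration of the underlying idea (alternate gradient steps with steps along directions of negative curvature until an approximate local minimum is reached), a toy sketch might look like the following. It forms the full Hessian explicitly, which the actual algorithm avoids, and all names and parameters are illustrative assumptions.

```python
import numpy as np

def second_order_descent(f, grad, hess, x0, steps=200, eta=0.1, tol=1e-6):
    """Toy scheme: gradient steps while the gradient is large, otherwise
    move along a direction of negative curvature; stop when the gradient
    is small and the Hessian has no significantly negative eigenvalue."""
    x = x0.astype(float)
    for _ in range(steps):
        g = grad(x)
        if np.linalg.norm(g) > tol:
            x = x - eta * g                        # first-order progress
        else:
            vals, vecs = np.linalg.eigh(hess(x))   # full eigendecomposition (toy)
            if vals[0] >= -tol:
                return x                           # approximate local minimum
            v = vecs[:, 0]                         # most negative curvature
            if f(x + eta * v) > f(x - eta * v):    # pick the descending sign
                v = -v
            x = x + eta * v
    return x

# toy usage: escape the saddle of f at the origin
f = lambda z: z[0] ** 4 + z[1] ** 4 - 2 * z[1] ** 2
grad = lambda z: np.array([4 * z[0] ** 3, 4 * z[1] ** 3 - 4 * z[1]])
hess = lambda z: np.diag([12 * z[0] ** 2, 12 * z[1] ** 2 - 4])
print(second_order_descent(f, grad, hess, np.zeros(2)))  # close to (0, +/-1)
```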

10.1145/3055399.3055464 article EN 2017-06-15

The quest for a polynomial-time approximation scheme (PTAS) for Nash equilibrium in a two-player game, which emerged as a major open question in algorithmic game theory, seeks to circumvent the PPAD-completeness of finding an (exact) Nash equilibrium by finding an approximate equilibrium. The closely related problem of maximizing a certain objective, such as the social welfare, was shown to be NP-hard [Gilboa and Zemel, Games Econom. Behav., 1 (1989), pp. 80–93]. However, this NP-hardness is unlikely to extend to approximate equilibria, since the latter admits...

10.1137/090766991 article EN SIAM Journal on Computing 2011-01-01

Conventional wisdom in deep learning states that increasing depth improves expressiveness but complicates optimization. This paper suggests that, sometimes, increasing depth can speed up optimization. The effect of depth on optimization is decoupled from expressiveness by focusing on settings where additional layers amount to overparameterization - linear neural networks, a well-studied model. Theoretical analysis, as well as experiments, show that here depth acts as a preconditioner which may accelerate convergence. Even on simple convex problems such as linear regression...
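
A hedged toy comparison of the setting studied here: gradient descent on plain linear regression versus gradient descent on a depth-2 linear (overparameterized) factorization of the same model. It only illustrates that the end-to-end dynamics differ; it is not the paper's analysis, and all variable names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true

loss = lambda w: 0.5 * np.mean((X @ w - y) ** 2)
grad = lambda w: X.T @ (X @ w - y) / len(y)
eta, steps = 0.05, 500

# (a) plain linear regression: gradient descent directly on w
w = np.zeros(5)
for _ in range(steps):
    w -= eta * grad(w)

# (b) depth-2 "linear network": parameterize the same model as w = w2 * w1
w1, w2 = np.full(5, 0.1), 0.1          # small nonzero init so gradients flow
for _ in range(steps):
    g = grad(w2 * w1)                  # gradient w.r.t. the end-to-end weights
    w1, w2 = w1 - eta * w2 * g, w2 - eta * (g @ w1)   # chain rule per factor

print(loss(w), loss(w2 * w1))          # both decrease, along different trajectories
```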

10.48550/arxiv.1802.06509 preprint EN other-oa arXiv (Cornell University) 2018-01-01

Suppose an agent is in a (possibly unknown) Markov Decision Process in the absence of a reward signal; what might we hope that it can efficiently learn to do? This work studies a broad class of objectives that are defined solely as functions of the state-visitation frequencies induced by how the agent behaves. For example, one natural, intrinsically defined, objective is for the agent to learn a policy which induces a distribution over the state space that is as uniform as possible, as measured in an entropic sense. We provide an efficient algorithm to optimize such...
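
The paper reduces entropy maximization over state-visitation distributions to repeated calls to a planning oracle in a conditional-gradient (Frank-Wolfe) style scheme. The sketch below is a simplified stand-in: it runs Frank-Wolfe over a fixed, finite set of candidate visitation distributions, with an argmax playing the role of the planning oracle; `max_entropy_mixture` and `dists` are illustrative names, not the paper's API.

```python
import numpy as np

def max_entropy_mixture(dists, iters=200):
    """Frank-Wolfe sketch: choose mixture weights over candidate
    state-visitation distributions to maximize the entropy of the mixture.

    dists: k x S array; row i is the visitation distribution induced by
    candidate policy i (assumed to be given).
    """
    k, _ = dists.shape
    alpha = np.ones(k) / k
    for t in range(iters):
        d = alpha @ dists                     # mixed state distribution
        reward = -np.log(d + 1e-12) - 1.0     # gradient of entropy w.r.t. d
        best = np.argmax(dists @ reward)      # linear oracle ~ "planning oracle"
        step = 2.0 / (t + 2.0)                # standard Frank-Wolfe step size
        alpha = (1.0 - step) * alpha
        alpha[best] += step
    return alpha

# toy usage: three candidate policies over three states
dists = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.2, 0.7],
                  [0.2, 0.6, 0.2]])
print(max_entropy_mixture(dists))  # mixture whose state distribution is near uniform
```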

10.48550/arxiv.1812.02690 preprint EN other-oa arXiv (Cornell University) 2018-01-01

Semidefinite programming (SDP) relaxations appear in many recent approximation algorithms, but the only general technique for solving such SDP relaxations is via interior point methods. We use a Lagrangian-relaxation based technique (modified from the papers of Plotkin, Shmoys, and Tardos (PST), and Klein and Lu) to derive faster algorithms for approximately solving several families of SDP relaxations. The algorithms are based upon some improvements to the PST ideas - which lead to new results even in their framework - as well as approximate eigenvalue computations using random sampling.

10.1109/sfcs.2005.35 article EN 2005-01-01

We study online learning in an oblivious changing environment. The standard measure of regret bounds the difference between the cost of the learner and the cost of the best decision in hindsight. Hence, regret-minimizing algorithms tend to converge to the static optimum, clearly a suboptimal behavior in changing environments. On the other hand, various metrics proposed to strengthen regret and allow for more dynamic algorithms produce inefficient algorithms. We propose a different performance metric which strengthens the standard measure of regret and measures performance with respect to a changing comparator. We then describe a series of...
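
One way to perform well against a changing comparator (not necessarily the construction used in the paper) is to run base learners started at different times and combine them with multiplicative weighting, so that a recently started learner can take over after the environment shifts. A naive, unpruned sketch for the experts setting, with all names illustrative:

```python
import numpy as np

def flh_hedge(loss_matrix, eta=0.2):
    """Naive sketch: start a fresh Hedge instance at every round and mix
    the instances' proposals with an outer multiplicative-weights layer,
    so that recently started instances can take over after a shift.
    (Kept deliberately simple; an efficient method would prune instances.)"""
    T, n = loss_matrix.shape
    inner = []                 # inner Hedge weight vectors, one per start time
    outer = []                 # outer weights over the instances
    plays = []
    for t in range(T):
        inner.append(np.ones(n))           # new instance started at round t
        outer.append(1.0 / (t + 1))
        mix = np.array(outer) / sum(outer)
        proposals = np.array([w / w.sum() for w in inner])
        p = mix @ proposals                # distribution actually played
        plays.append(p)
        loss = loss_matrix[t]
        for i in range(len(inner)):
            inner[i] = inner[i] * np.exp(-eta * loss)
            outer[i] *= np.exp(-eta * (proposals[i] @ loss))
    return np.array(plays)
```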

10.1145/1553374.1553425 article EN 2009-06-14