An Asynchronous Distributed Proximal Gradient Method for Composite Convex Optimization

Proximal Gradient Methods
DOI: 10.48550/arxiv.1409.8547
Publication Date: 2014-01-01
ABSTRACT
We propose a distributed first-order augmented Lagrangian (DFAL) algorithm to minimize the sum of composite convex functions, where each term in the sum is a private cost function belonging to a node, and only nodes connected by an edge can directly communicate with each other. This optimization model abstracts a number of applications in distributed sensing and machine learning. We show that any limit point of DFAL iterates is optimal; and for any $ε>0$, an $ε$-optimal and $ε$-feasible solution can be computed within $\mathcal{O}(\log(ε^{-1}))$ DFAL iterations, which require $\mathcal{O}(\frac{ψ_{\max}^{1.5}}{d_{\min}} ε^{-1})$ proximal gradient computations and communications per node in total, where $ψ_{\max}$ denotes the largest eigenvalue of the graph Laplacian and $d_{\min}$ is the minimum degree of the graph. We also propose an asynchronous version of DFAL by incorporating randomized block coordinate descent methods, and demonstrate its efficiency on large-scale sparse-group LASSO problems.
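
The complexity bound above counts proximal gradient computations: each node repeatedly takes a gradient step on its smooth local loss and then applies the proximal operator of the nonsmooth regularizer. The sketch below (Python/NumPy) illustrates that building block for the sparse-group LASSO objective $0.5\|Ax-b\|^2 + λ_1\|x\|_1 + λ_2\sum_g \|x_g\|_2$ on a single machine. It is a minimal illustration under assumed names (prox_sparse_group_lasso, proximal_gradient) and a least-squares loss, not the distributed DFAL algorithm itself.

    import numpy as np

    def prox_sparse_group_lasso(v, step, lam1, lam2, groups):
        # Prox of g(x) = lam1*||x||_1 + lam2*sum_g ||x_g||_2 for
        # non-overlapping groups (list of index arrays partitioning x).
        # Coordinate-wise soft-thresholding for the l1 term:
        x = np.sign(v) * np.maximum(np.abs(v) - step * lam1, 0.0)
        # Block-wise shrinkage for the group l2 term:
        for g in groups:
            norm = np.linalg.norm(x[g])
            if norm > 0.0:
                x[g] *= max(0.0, 1.0 - step * lam2 / norm)
        return x

    def proximal_gradient(A, b, lam1, lam2, groups, n_iter=500):
        # Minimize 0.5*||Ax-b||^2 + lam1*||x||_1 + lam2*sum_g ||x_g||_2.
        x = np.zeros(A.shape[1])
        # Step size 1/L, where L = largest eigenvalue of A^T A is the
        # Lipschitz constant of the gradient of the smooth part.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)   # gradient of the least-squares loss
            x = prox_sparse_group_lasso(x - step * grad, step, lam1, lam2, groups)
        return x

    # Example: 60 samples, 20 features in 4 groups of 5 (synthetic data).
    rng = np.random.default_rng(0)
    A = rng.standard_normal((60, 20))
    b = rng.standard_normal(60)
    groups = [np.arange(i, i + 5) for i in range(0, 20, 5)]
    x_hat = proximal_gradient(A, b, lam1=0.1, lam2=0.1, groups=groups)

For non-overlapping groups, the proximal operator of the combined penalty decomposes exactly into coordinate-wise soft-thresholding followed by block-wise shrinkage, which is why the two steps can be applied in sequence.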