Distributed Optimization for Client-Server Architecture with Negative Gradient Weights
DOI:
10.48550/arxiv.1608.03866
Publication Date:
2016-01-01
AUTHORS (2)
ABSTRACT
Availability of both massive datasets and computing resources has made machine learning and predictive analytics extremely pervasive. In this work we present a synchronous algorithm and architecture for distributed optimization motivated by privacy requirements posed by applications in machine learning. The algorithm builds on the recently proposed multi-parameter-server architecture. We consider a group of parameter servers that learn a model based on randomized gradients received from clients. Clients are computational entities with private datasets (inducing a private objective function) that evaluate and upload randomized gradients to the servers. The servers perform model updates and share the model parameters with the other servers. We prove that the algorithm can optimize the overall objective function in a very general setting involving $C$ clients connected to $S$ servers over an arbitrary time-varying topology, with the servers forming a connected network.
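To make the client/multi-parameter-server interaction described above concrete, the following is a minimal sketch in Python of one possible synchronous round structure: clients evaluate gradients of private (here, synthetic quadratic) objectives, upload randomly weighted copies to several servers, and the servers take gradient steps before averaging their parameters. The weight randomization (allowing negative weights that re-center to sum to one) and the fully connected server averaging are illustrative simplifications, not the paper's actual scheme or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: C clients, each holding a private quadratic objective
# f_c(x) = 0.5 * ||A_c x - b_c||^2; the global objective is their sum.
C, S, dim = 6, 3, 4
A = [rng.standard_normal((8, dim)) for _ in range(C)]
b = [rng.standard_normal(8) for _ in range(C)]

def client_gradient(c, x):
    """Gradient of client c's private objective at x."""
    return A[c].T @ (A[c] @ x - b[c])

# Each parameter server starts from the same initial model.
models = [np.zeros(dim) for _ in range(S)]
step = 0.01

for t in range(2000):
    # Each client evaluates its gradient at the model of a randomly chosen
    # server and uploads a randomly weighted copy to every server. The weights
    # may be negative but are re-centered so they sum to one, keeping the
    # aggregate across servers equal to the client's true gradient.
    uploads = [np.zeros(dim) for _ in range(S)]
    for c in range(C):
        g = client_gradient(c, models[rng.integers(S)])
        w = rng.normal(loc=1.0 / S, scale=0.5, size=S)  # weights can be negative
        w += (1.0 / S) - w.mean()                       # re-center so sum(w) = 1
        for s in range(S):
            uploads[s] += w[s] * g

    # Servers take a gradient step, then average parameters with the other
    # servers (a fully connected server network, for simplicity).
    models = [models[s] - step * uploads[s] for s in range(S)]
    consensus = sum(models) / S
    models = [consensus.copy() for _ in range(S)]

print("final model:", models[0])
```

In this simplified version, the re-centered weights make the per-round noise cancel after server averaging, so the consensus model follows plain gradient descent on the sum of the client objectives; the paper's contribution concerns the much more general case of time-varying client-server connectivity and randomized (possibly negative) gradient weights.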