Stochastic Coded Federated Learning: Theoretical Analysis and Incentive Mechanism Design

Keywords: Upload, Edge device, Stackelberg competition, Federated Learning
DOI: 10.48550/arxiv.2211.04132 Publication Date: 2022-01-01
ABSTRACT
Federated learning (FL) has achieved great success as a privacy-preserving distributed training paradigm, where many edge devices collaboratively train a machine learning model by sharing model updates instead of raw data with a server. However, heterogeneous computational and communication resources give rise to stragglers that significantly decelerate the training process. To mitigate this issue, we propose a novel FL framework named stochastic coded federated learning (SCFL) that leverages coded computing techniques. In SCFL, before the training process starts, each edge device uploads a coded dataset to the server, which is generated by adding Gaussian noise to the projected local dataset. During training, the server computes gradients on the global coded dataset to compensate for the missing updates of the straggling devices. We design a gradient aggregation scheme to ensure that the aggregated model update is an unbiased estimate of the desired update. Moreover, this scheme enables periodical model averaging to improve the training efficiency. We characterize the tradeoff between the convergence performance and the privacy guarantee of SCFL. In particular, a more noisy coded dataset provides stronger privacy protection but results in performance degradation. We further develop a contract-based incentive mechanism to coordinate such a conflict. The simulation results show that SCFL learns a better model within a given training time and achieves a better privacy-performance tradeoff than the baseline methods. In addition, the proposed incentive mechanism grants better training performance than the conventional Stackelberg game approach.
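
To make the SCFL workflow described in the abstract concrete, the following Python sketch illustrates its two core ideas: each device uploads a coded dataset built by projecting its local data with a random matrix and adding Gaussian noise, and the server uses gradients computed on the pooled coded data to stand in for straggling devices. The projection size, noise level, linear-regression loss, and the simple convex-combination weighting are illustrative assumptions, not the paper's exact construction or its unbiased aggregation rule.

# Illustrative sketch (not the paper's exact algorithm): devices project local data
# with a random matrix, add Gaussian noise, and upload the result; the server's
# gradient on the pooled coded data compensates for straggling devices.
import numpy as np

rng = np.random.default_rng(0)

def make_coded_dataset(X, y, num_coded_rows, noise_std):
    """Compress (X, y) with a random projection and perturb it with Gaussian noise."""
    n = X.shape[0]
    G = rng.normal(0.0, 1.0 / np.sqrt(n), size=(num_coded_rows, n))  # random projection
    X_coded = G @ X + rng.normal(0.0, noise_std, size=(num_coded_rows, X.shape[1]))
    y_coded = G @ y + rng.normal(0.0, noise_std, size=num_coded_rows)
    return X_coded, y_coded

def linear_gradient(X, y, w):
    """Gradient of the squared loss 0.5 * ||X w - y||^2 / n for a linear model."""
    return X.T @ (X @ w - y) / X.shape[0]

# Toy setup: 4 devices, each with a small local regression dataset.
d = 5
devices = [(rng.normal(size=(40, d)), rng.normal(size=40)) for _ in range(4)]
coded = [make_coded_dataset(X, y, num_coded_rows=10, noise_std=0.1) for X, y in devices]
X_coded_global = np.vstack([Xc for Xc, _ in coded])
y_coded_global = np.concatenate([yc for _, yc in coded])

w = np.zeros(d)
for rnd in range(20):
    # Only a random subset of devices returns an update this round (the rest straggle).
    alive = [i for i in range(len(devices)) if rng.random() < 0.7]
    local_grads = [linear_gradient(*devices[i], w) for i in alive]
    # Gradient on the coded data substitutes for the missing devices; this convex
    # combination mimics, but does not reproduce, the paper's unbiased aggregation.
    coded_grad = linear_gradient(X_coded_global, y_coded_global, w)
    frac_alive = len(alive) / len(devices)
    if alive:
        agg = frac_alive * np.mean(local_grads, axis=0) + (1 - frac_alive) * coded_grad
    else:
        agg = coded_grad
    w -= 0.1 * agg

print("final weights:", w)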