Resource-Efficient and Delay-Aware Federated Learning Design under Edge Heterogeneity

DOI: 10.48550/arxiv.2112.13926
Publication Date: 2021-01-01
ABSTRACT
Federated learning (FL) has emerged as a popular technique for distributing machine learning (ML) across wireless edge devices. We examine FL under two salient properties of contemporary networks: device-server communication delays and device computation heterogeneity. Our proposed StoFedDelAv algorithm incorporates a local-global model combiner into the synchronization step. We theoretically characterize the convergence behavior of StoFedDelAv and obtain the optimal combiner weights, which consider the global model delay and the expected local gradient error at each device. We then formulate a network-aware optimization problem that tunes the minibatch sizes of the devices to jointly minimize energy consumption and ML training loss, and solve the non-convex problem through a series of convex approximations. Our simulations reveal that StoFedDelAv outperforms the current art in FL, evidenced by the improvements obtained in the optimization objective.
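The sketch below illustrates the kind of synchronization step the abstract describes: each device blends the (possibly delayed) global model with its own local model before its next round of local training, and the server aggregates local models with weights tied to the devices' minibatch sizes. It is a minimal illustration, not the paper's method; the names `combine_local_global`, `server_aggregate`, and the fixed combiner weight `gamma` are hypothetical stand-ins for the optimized, delay- and gradient-error-dependent weights derived in the paper.

```python
import numpy as np

# Hypothetical sketch (not the paper's implementation) of a delay-aware
# local-global combining step in federated learning. `gamma` stands in for
# the paper's optimized combiner weights, which account for the global model
# delay and each device's expected local gradient error.

def combine_local_global(w_local, w_global_delayed, gamma):
    """Mix the delayed global model with the device's local model."""
    return gamma * w_global_delayed + (1.0 - gamma) * w_local

def server_aggregate(local_models, minibatch_sizes):
    """FedAvg-style aggregation weighted by each device's minibatch size."""
    weights = np.asarray(minibatch_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * m for w, m in zip(weights, local_models))

# Toy usage: three heterogeneous devices with a 3-dimensional model.
rng = np.random.default_rng(0)
local_models = [rng.normal(size=3) for _ in range(3)]
minibatch_sizes = [16, 32, 64]   # per-device sizes; tuned by the paper's optimization
w_global = server_aggregate(local_models, minibatch_sizes)

gamma = 0.7                      # placeholder combiner weight
w_start = combine_local_global(local_models[0], w_global, gamma)
print(w_start)
```

In the paper's formulation, the combiner weight and the minibatch sizes are not fixed constants as above but are chosen to balance convergence speed against device energy consumption under communication delay.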