Delay-Aware Hierarchical Federated Learning
Robustness
Stochastic Gradient Descent
Federated Learning
Edge device
Distributed learning
DOI:
10.48550/arxiv.2303.12414
Publication Date:
2023-01-01
AUTHORS (4)
ABSTRACT
Federated learning (FL) has gained popularity as a means of training models distributed across the wireless edge. This paper introduces delay-aware hierarchical federated learning (DFL) to improve the efficiency of distributed machine learning (ML) model training by accounting for communication delays between the edge and the cloud. Different from traditional federated learning, DFL leverages multiple stochastic gradient descent iterations on local datasets within each global aggregation period and intermittently aggregates model parameters through edge servers in local subnetworks. During global synchronization, the cloud server consolidates the local models with the outdated global model using a local-global combiner, thus preserving crucial elements of both and enhancing learning performance under the presence of delay. A set of conditions is obtained to achieve a sub-linear convergence rate of O(1/k) for strongly convex and smooth loss functions. Based on these findings, an adaptive control algorithm is developed for DFL, implementing policies to mitigate energy consumption and communication latency while aiming for a sub-linear convergence rate. Numerical evaluations show DFL's superior performance in terms of faster convergence, reduced resource consumption, and robustness against communication delays compared to existing FL algorithms. In summary, the proposed method offers improved results when dealing with both convex and non-convex loss functions.
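To make the described training flow concrete, below is a minimal sketch (not the authors' implementation) of a two-level hierarchy with a delay-aware local-global combiner: devices run several local SGD steps, edge servers intermittently average the models in their subnetwork, and the cloud combines the fresh edge-level average with the stale global model via a mixing weight. The quadratic toy losses, the combiner form, and names such as `gamma` and `local_global_combine` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed for illustration): each device holds a quadratic loss
#   f_i(w) = 0.5 * ||A_i w - b_i||^2, so grad f_i(w) = A_i^T (A_i w - b_i).
DIM, DEVICES_PER_EDGE, NUM_EDGES = 5, 3, 2
A = [[rng.normal(size=(10, DIM)) for _ in range(DEVICES_PER_EDGE)] for _ in range(NUM_EDGES)]
b = [[rng.normal(size=10) for _ in range(DEVICES_PER_EDGE)] for _ in range(NUM_EDGES)]

def grad(e, d, w):
    """Gradient of device (e, d)'s local loss at w."""
    return A[e][d].T @ (A[e][d] @ w - b[e][d])

def local_global_combine(edge_avg, stale_global, gamma=0.5):
    """Delay-aware combiner (assumed form): mix the fresh edge-level average
    with the outdated global model received after a communication delay."""
    return gamma * edge_avg + (1.0 - gamma) * stale_global

w_global = np.zeros(DIM)          # model held at the cloud
stale_global = w_global.copy()    # what the edge actually sees, delayed by one round
LR, LOCAL_STEPS, EDGE_PERIODS = 0.01, 5, 4

for k in range(20):               # global aggregation rounds
    edge_models = []
    for e in range(NUM_EDGES):
        # Devices in this subnetwork start from the (possibly stale) global model.
        device_models = [stale_global.copy() for _ in range(DEVICES_PER_EDGE)]
        for _ in range(EDGE_PERIODS):
            # Multiple local SGD iterations on each device's dataset.
            for d in range(DEVICES_PER_EDGE):
                for _ in range(LOCAL_STEPS):
                    device_models[d] -= LR * grad(e, d, device_models[d])
            # Intermittent aggregation at this subnetwork's edge server.
            edge_avg = np.mean(device_models, axis=0)
            device_models = [edge_avg.copy() for _ in range(DEVICES_PER_EDGE)]
        edge_models.append(edge_avg)

    # Cloud synchronization: combine the fresh edge averages with the
    # outdated global model to preserve crucial elements of both.
    w_global = local_global_combine(np.mean(edge_models, axis=0), stale_global)
    stale_global = w_global.copy()  # becomes the delayed model for the next round
```

In this sketch, `gamma` close to 1 trusts the fresh edge models, while smaller values retain more of the delayed global state; an adaptive control policy like the one described in the abstract would tune such knobs over time, whereas here `gamma` is a fixed assumption.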