FedEx: Expediting Federated Learning over Heterogeneous Mobile Devices by Overlapping and Participant Selection
DOI:
10.48550/arxiv.2407.00943
Publication Date:
2024-06-30
AUTHORS (7)
ABSTRACT
Training latency is critical for the success of numerous applications enabled by federated learning (FL) over heterogeneous mobile devices. By overlapping local gradient transmission with continuous local computing, FL can remarkably reduce its training latency over homogeneous clients, yet it encounters severe model staleness, model drifts, memory cost, and straggler issues in heterogeneous environments. To unleash the full potential of overlapping, we propose FedEx, a novel federated learning approach to expedite FL training over mobile devices under data, computing, and wireless heterogeneity. FedEx redefines the overlapping procedure with staleness ceilings to constrain memory consumption and make overlapping compatible with participant selection (PS) designs. Then, FedEx characterizes the PS utility function by considering the latency reduced by overlapping, and provides a holistic PS solution to address the straggler issue. FedEx also introduces a simple yet effective metric to trigger overlapping, in order to avoid model drifts. Experimental results show that, compared with its peer designs, FedEx demonstrates substantial reductions in FL training latency over heterogeneous mobile devices with limited memory cost.
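The mechanisms the abstract names (overlapping transmission with computing, a staleness ceiling, and a utility-driven participant selection) can be illustrated with a minimal sketch. All names, the latency model, and the utility definition below are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch of overlapping-aware participant selection.
# Assumptions (not from the paper): per-client latency is modeled as
# compute + upload time; overlapping lets round-t upload run concurrently
# with round-(t+1) computing; utility is the latency saved by overlapping.

def overlapped_latency(compute_time, upload_time, overlap=True):
    """Per-round client latency. With overlapping, gradient upload
    proceeds concurrently with the next round's local computing."""
    if overlap:
        return max(compute_time, upload_time)
    return compute_time + upload_time

def select_participants(clients, k, staleness_ceiling):
    """Pick k clients by a utility rewarding latency saved through
    overlapping, excluding clients whose gradient staleness exceeds
    the ceiling (which also bounds buffered-gradient memory)."""
    eligible = [c for c in clients if c["staleness"] <= staleness_ceiling]

    def utility(c):
        sequential = c["compute"] + c["upload"]
        return sequential - overlapped_latency(c["compute"], c["upload"])

    return sorted(eligible, key=utility, reverse=True)[:k]

# Toy client pool: (compute time, upload time, rounds of staleness).
clients = [
    {"id": 0, "compute": 4.0, "upload": 3.5, "staleness": 1},
    {"id": 1, "compute": 6.0, "upload": 1.0, "staleness": 4},  # too stale
    {"id": 2, "compute": 2.0, "upload": 2.5, "staleness": 0},
]
chosen = select_participants(clients, k=2, staleness_ceiling=2)
print([c["id"] for c in chosen])  # client 1 is filtered out by the ceiling
```

Here client 0 is ranked first (overlapping hides 3.5 time units of its upload), client 2 second, and client 1 is excluded because its staleness exceeds the ceiling, mirroring how a ceiling both caps memory for buffered stale gradients and keeps selection compatible with overlapping.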