Asynchronous Multi-Model Dynamic Federated Learning over Wireless Networks: Theory, Modeling, and Optimization
DOI:
10.48550/arxiv.2305.13503
Publication Date:
2023-01-01
AUTHORS (4)
ABSTRACT
Federated learning (FL) has emerged as a key technique for distributed machine learning (ML). Most literature on FL has focused on ML model training for (i) a single task/model, with (ii) a synchronous scheme for updating model parameters, and (iii) a static data distribution across devices, which is often not realistic in practical wireless environments. To address this, we develop DMA-FL, which considers dynamic FL with multiple downstream tasks/models trained over an asynchronous model update architecture. We first characterize convergence by introducing scheduling tensors and rectangular functions to capture the impact of system parameters on learning performance. Our analysis sheds light on the joint impact of device training variables (e.g., the number of local gradient descent steps), asynchronous scheduling decisions (i.e., when a device trains a task), and data drifts on the performance of different tasks. Leveraging these results, we formulate an optimization problem for jointly configuring resource allocation and device scheduling to strike an efficient trade-off between energy consumption and ML performance. Our solver for the resulting non-convex mixed integer program employs constraint relaxations and successive convex approximations with convergence guarantees. Through numerical experiments, we reveal that DMA-FL substantially improves the performance-efficiency tradeoff.
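To make the asynchronous, multi-model setup described above concrete, the following is a minimal illustrative sketch of a server loop driven by a binary scheduling tensor, where each device runs its own budget of local gradient descent steps on the task it is assigned in a given round and per-task global models are aggregated from whichever devices participated. This is not the authors' algorithm or implementation; the names (schedule, local_steps, local_sgd), the toy quadratic loss, the staleness-based weighting, and the drift model are all assumptions made for illustration.

# Illustrative sketch (not the paper's implementation) of asynchronous,
# multi-model federated learning driven by a binary scheduling tensor.
# All names and modeling choices here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

num_devices, num_tasks, num_rounds, dim = 4, 2, 10, 5

# One global model per task; per-device data that drifts over time.
global_models = [rng.normal(size=dim) for _ in range(num_tasks)]
device_data = rng.normal(size=(num_devices, num_tasks, dim))

# Scheduling tensor S[i, m, t] = 1 if device i trains task m at round t.
schedule = rng.integers(0, 2, size=(num_devices, num_tasks, num_rounds))

# Number of local gradient descent steps per device (a tunable resource knob).
local_steps = rng.integers(1, 5, size=num_devices)

def local_sgd(w, target, steps, lr=0.1):
    """Run `steps` gradient descent steps on a toy quadratic loss ||w - target||^2."""
    for _ in range(steps):
        w = w - lr * 2.0 * (w - target)
    return w

last_update_round = np.zeros((num_devices, num_tasks), dtype=int)

for t in range(num_rounds):
    for m in range(num_tasks):
        updates, weights = [], []
        for i in range(num_devices):
            if schedule[i, m, t] == 0:
                continue  # device i does not train task m this round (asynchronous schedule)
            # Device trains from the current global model with its own step budget.
            w_local = local_sgd(global_models[m].copy(), device_data[i, m], local_steps[i])
            # Staleness-aware weight: devices that have not participated recently count less.
            staleness = t - last_update_round[i, m]
            weights.append(1.0 / (1.0 + staleness))
            updates.append(w_local)
            last_update_round[i, m] = t
        if updates:
            # Weighted aggregation of the asynchronous updates for task m.
            weights = np.array(weights) / np.sum(weights)
            global_models[m] = np.sum([w * u for w, u in zip(weights, updates)], axis=0)
        # Simulate data drift across rounds (the "dynamic" aspect of the setting).
        device_data[:, m] += 0.01 * rng.normal(size=(num_devices, dim))

print("Final per-task global models:", [np.round(g, 3) for g in global_models])

In the paper, the scheduling tensor and the per-device step counts are decision variables of the energy-performance optimization rather than randomly drawn as in this sketch.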