AdaptCL: Efficient Collaborative Learning with Dynamic and Adaptive Pruning
DOI:
10.48550/arxiv.2106.14126
Publication Date:
2021-01-01
AUTHORS (5)
ABSTRACT
In multi-party collaborative learning, the parameter server sends a global model to each data holder for local training and then aggregates the committed models globally to achieve privacy protection. However, both the straggler issue of synchronous learning and the staleness issue of asynchronous learning make collaborative learning inefficient in real-world heterogeneous environments. We propose a novel and efficient collaborative learning framework named AdaptCL, which dynamically generates an adaptive sub-model from the global base model for each data holder, without any prior information about worker capability. By equipping workers with capability-adapted pruned models, all workers (data holders) reach approximately the same update time as the fastest worker, so the training process can be dramatically accelerated. Besides, we tailor a pruning-rate learning algorithm and an efficient pruning approach for AdaptCL. Meanwhile, AdaptCL provides a mechanism for handling the trade-off between accuracy and time overhead, and it can be combined with other techniques to accelerate training further. Empirical results show that AdaptCL introduces little computing and communication overhead. AdaptCL achieves time savings of more than 41\% on average and improves accuracy in a low-heterogeneity environment. In a highly heterogeneous environment, AdaptCL achieves a 6.2x speedup with only a slight loss of accuracy.
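The core idea of capability-adapted pruning can be illustrated with a minimal sketch: slower workers receive more aggressively pruned sub-models so that their local update time matches the fastest worker. The heuristic below (function names, the pruning-rate formula, and the unstructured magnitude criterion are illustrative assumptions, not AdaptCL's actual algorithm) prunes the smallest-magnitude weights at a rate derived from relative worker speed.

```python
import numpy as np

def pruning_rate(worker_time, fastest_time, max_rate=0.9):
    # Illustrative heuristic (not the paper's learned rate): a worker
    # that is twice as slow gets half its weights pruned, capped at
    # max_rate so the sub-model never degenerates completely.
    rate = 1.0 - fastest_time / worker_time
    return min(max(rate, 0.0), max_rate)

def magnitude_prune(weights, rate):
    # Unstructured magnitude pruning: zero out the `rate` fraction of
    # smallest-magnitude entries, keeping the tensor shape intact so
    # the server can still aggregate sub-models element-wise.
    if rate <= 0.0:
        return weights.copy()
    flat = np.abs(weights).ravel()
    k = int(rate * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: a worker twice as slow as the fastest gets rate 0.5.
w = np.arange(1.0, 11.0)              # weights with magnitudes 1..10
rate = pruning_rate(worker_time=2.0, fastest_time=1.0)
sub_model = magnitude_prune(w, rate)  # 5 smallest entries zeroed
```

In a full framework, the mask would be sent alongside the global model each round and the server would aggregate only the unpruned coordinates; this sketch shows only the per-worker pruning step.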