FedBoosting: Federated learning with gradient protected boosting for text recognition
KEYWORDS
Federated Learning; Boosting; Homomorphic Encryption; Differential Privacy; Gradient Boosting; Data Sharing
DOI: 10.1016/j.neucom.2023.127126
Publication Date: 2023-12-12
ABSTRACT
Conventional machine learning methodologies require the centralization of data for model training, which may be infeasible in situations where data sharing limitations are imposed due to concerns such as privacy and gradient protection. The Federated Learning (FL) framework enables the collaborative training of a shared model without necessitating the sharing or disclosure of data among the proprietors. Nonetheless, in this paper, we demonstrate that the generalization capability of the joint model is suboptimal on Non-Independent and Non-Identically Distributed (Non-IID) data, particularly when employing the Federated Averaging (FedAvg) strategy, as a result of the weight divergence phenomenon. Consequently, we present a novel boosting algorithm for FL to address both the generalization and gradient leakage challenges, as well as to facilitate accelerated convergence in gradient-based optimization. Furthermore, we introduce a secure gradient sharing protocol that incorporates Homomorphic Encryption (HE) and Differential Privacy (DP) to safeguard against gradient leakage attacks. Our empirical evaluation demonstrates that the proposed Federated Boosting (FedBoosting) technique yields significant enhancements in prediction accuracy and computational efficiency on the visual text recognition task on publicly available benchmarks.
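The aggregation strategy described in the abstract can be sketched concretely. Below is a minimal NumPy sketch contrasting plain FedAvg, whose size-weighted average is prone to weight divergence on Non-IID data, with a boosting-style aggregation that up-weights clients whose models generalize well to the other clients' validation data. The function names and the softmax-over-accuracy weighting are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg: average client models, weighted by local dataset size.
    On Non-IID data this average can drift from any good joint model
    (the weight divergence phenomenon noted in the abstract)."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    return [sum(c * w[i] for c, w in zip(coeffs, client_weights))
            for i in range(len(client_weights[0]))]

def fedboost_aggregate(client_weights, val_accuracies):
    """Boosting-style aggregation (hypothetical helper): client k's model
    is weighted by its generalization score, assumed here to be its mean
    accuracy on the other clients' validation sets."""
    acc = np.asarray(val_accuracies, dtype=float)
    coeffs = np.exp(acc) / np.exp(acc).sum()  # softmax over scores
    return [sum(c * w[i] for c, w in zip(coeffs, client_weights))
            for i in range(len(client_weights[0]))]

# Toy usage: two clients, each holding a one-layer model.
w1, w2 = [np.array([1.0, 2.0])], [np.array([3.0, 4.0])]
print(fedavg([w1, w2], client_sizes=[100, 300]))
print(fedboost_aggregate([w1, w2], val_accuracies=[0.9, 0.6]))
```

The key difference is that the aggregation coefficients reflect cross-client generalization rather than raw data volume, which is what allows a boosting scheme to down-weight clients whose local optima diverge.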
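The secure gradient sharing protocol can be sketched in the same spirit: each client clips and perturbs its gradient (a standard DP Gaussian mechanism), and an additively homomorphic cryptosystem lets the server sum encrypted gradients without seeing any individual contribution. The sketch below uses the third-party `phe` (python-paillier) package; the clipping norm, noise scale, and per-scalar Paillier encryption are assumptions for illustration, not the exact protocol of the paper.

```python
import numpy as np
from functools import reduce
from operator import add
from phe import paillier  # third-party python-paillier package

def dp_sanitize(grad, clip_norm=1.0, sigma=0.5, rng=np.random.default_rng(0)):
    """Gaussian mechanism: bound the gradient's L2 norm, then add noise."""
    scale = min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
    return grad * scale + rng.normal(0.0, sigma * clip_norm, grad.shape)

# Clients encrypt their sanitized gradients under a shared public key.
pub_key, priv_key = paillier.generate_paillier_keypair(n_length=1024)
client_grads = [np.array([0.2, -0.1]), np.array([0.4, 0.3])]
encrypted = [[pub_key.encrypt(float(x)) for x in dp_sanitize(g)]
             for g in client_grads]

# The server adds ciphertexts coordinate-wise; only the sum is decrypted.
summed = [reduce(add, col) for col in zip(*encrypted)]
avg_grad = np.array([priv_key.decrypt(c) for c in summed]) / len(client_grads)
print(avg_grad)
```

Because only the aggregate ever reaches plaintext, the server never observes an individual gradient, and the DP noise additionally bounds what the aggregate itself can leak about any client's data.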