FedEAT: A Robustness Optimization Framework for Federated LLMs

DOI: 10.48550/arxiv.2502.11863 Publication Date: 2025-02-17
ABSTRACT
Significant advancements have been made by Large Language Models (LLMs) in the domains of natural language understanding and automated content creation. However, they still face persistent problems, including substantial computational costs and inadequate availability of training data. The combination of Federated Learning (FL) and LLMs (federated LLMs) offers a solution by leveraging distributed data while protecting privacy, which positions it as an ideal choice for sensitive domains. However, federated LLMs still suffer from robustness challenges, including data heterogeneity, malicious clients, and adversarial attacks, which greatly hinder their applications. We first introduce the robustness problems in federated LLMs; to address these problems, we propose FedEAT (Federated Embedding space Adversarial Training), a novel framework that applies adversarial training in the embedding space of the client LLM and employs a robust aggregation approach, specifically geometric median aggregation, to enhance the robustness of federated LLMs. Our experiments demonstrate that FedEAT effectively improves the robustness of federated LLMs with minimal performance loss.
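As an illustration of the robust aggregation step, the geometric median of the client updates minimizes the sum of Euclidean distances to all updates, which makes it far less sensitive to outlier (e.g., malicious) contributions than a plain average. The sketch below computes it with Weiszfeld's algorithm; the function name, parameters, and toy client updates are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def geometric_median(points, max_iter=100, tol=1e-6):
    """Approximate the geometric median of a set of vectors via
    Weiszfeld's algorithm (iteratively reweighted averaging).
    Illustrative sketch, not the paper's exact aggregation code."""
    points = np.asarray(points, dtype=float)
    median = points.mean(axis=0)  # start from the coordinate-wise mean
    for _ in range(max_iter):
        dists = np.linalg.norm(points - median, axis=1)
        dists = np.maximum(dists, 1e-12)  # guard against division by zero
        weights = 1.0 / dists
        new_median = (weights[:, None] * points).sum(axis=0) / weights.sum()
        if np.linalg.norm(new_median - median) < tol:
            return new_median
        median = new_median
    return median

# Toy example: three honest client updates near (1, 1) and one
# malicious outlier. The geometric median stays near the honest
# cluster, whereas the plain mean would be dragged far away.
updates = [np.array([1.0, 1.0]), np.array([1.1, 1.0]),
           np.array([0.9, 1.1]), np.array([50.0, -50.0])]
print(geometric_median(updates))
```

The same aggregation can replace simple averaging in a federated round: each client sends its (flattened) model update, and the server applies the geometric median instead of the mean before broadcasting the new global model.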