Communication-Efficient Multimodal Federated Learning: Joint Modality and Client Selection

DOI: 10.48550/arxiv.2401.16685 Publication Date: 2024-01-29
ABSTRACT
Multimodal federated learning (FL) aims to enrich model training in FL settings where clients are collecting measurements across multiple modalities. However, key challenges to multimodal FL remain unaddressed, particularly in heterogeneous network settings where: (i) the set of modalities collected by each client will be diverse, and (ii) communication limitations prevent clients from uploading all their locally trained modality models to the server. In this paper, we propose multimodal Federated learning with joint Modality and Client selection (mmFedMC), a new FL methodology that can tackle the above-mentioned challenges in multimodal settings. The joint selection algorithm incorporates two main components: (a) a modality selection methodology for each client, which weighs (i) the impact of the modality, gauged by Shapley value analysis, and (ii) the modality model size as a gauge of communication overhead, against (iii) the frequency of modality model updates, denoted recency, to enhance generalizability; and (b) a client selection strategy for the server based on the local loss of the modality model at each client. Experiments on five real-world datasets demonstrate the ability of mmFedMC to achieve comparable accuracy to several baselines while reducing the communication overhead by over 20x. A demo video of our methodology is available at https://liangqiy.com/mmfedmc/.
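The abstract describes a two-stage selection: each client scores its modalities by Shapley-style impact, model size (upload cost), and recency, then the server picks clients by local loss. A minimal sketch of that scoring logic is below; the function names, the linear weighting form, and the choice of selecting highest-loss clients are all assumptions for illustration, not the paper's exact formulation.

```python
def select_modalities(shapley, sizes, staleness, k,
                      w_imp=1.0, w_size=0.5, w_rec=0.3):
    """Rank a client's modalities by a weighted score and keep the top-k.

    shapley   -- dict: modality -> estimated Shapley impact on local performance
    sizes     -- dict: modality -> normalized model size (communication cost)
    staleness -- dict: modality -> rounds since this modality model was last
                 uploaded (a proxy for the paper's "recency" term)

    The linear combination of the three terms is an assumed form.
    """
    scores = {
        m: w_imp * shapley[m] - w_size * sizes[m] + w_rec * staleness[m]
        for m in shapley
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]


def select_clients(local_losses, n):
    """Server-side selection based on reported local loss.

    Assumption: clients with the highest local loss are prioritized,
    i.e., those whose modality models have the most room to improve.
    """
    return sorted(local_losses, key=local_losses.get, reverse=True)[:n]
```

For example, a client whose IMU model is small and stale can outrank a higher-impact but bulky video model, which is how the method trades accuracy against upload overhead.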