FedED: Federated Learning via Ensemble Distillation for Medical Relation Extraction
DOI: 10.18653/v1/2020.emnlp-main.165
Publication Date: 2020-11-29
AUTHORS (6)
ABSTRACT
Unlike other domains, medical texts are inevitably accompanied by private information, so sharing or copying these texts is strictly restricted. However, training a medical relation extraction model requires collecting these privacy-sensitive texts and storing them on one machine, which comes in conflict with privacy protection. In this paper, we propose a privacy-preserving medical relation extraction model based on federated learning, which enables training a central model with no single piece of private local data being shared or exchanged. Though federated learning has distinct advantages in privacy protection, it suffers from a communication bottleneck, mainly caused by the need to upload cumbersome local parameters. To overcome this bottleneck, we leverage a strategy based on knowledge distillation: the uploaded predictions of an ensemble of local models are used to train the central model, without requiring the upload of local parameters. Experiments on three publicly available medical relation extraction datasets demonstrate the effectiveness of our method.
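The abstract describes the core mechanism: local models are never uploaded; instead, their predictions on a shared distillation set are ensembled and used as soft targets for the central model. The following is a minimal sketch of that idea, not the authors' implementation: the linear classifiers, random distillation data, and hyperparameters are placeholder assumptions chosen only to illustrate prediction averaging and KL-based distillation.

```python
# Illustrative sketch of federated ensemble distillation (assumptions: a shared
# unlabeled distillation set and toy linear classifiers; not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLIENTS, NUM_CLASSES, FEAT_DIM = 3, 5, 32

def make_model():
    return nn.Linear(FEAT_DIM, NUM_CLASSES)

# Each client is assumed to have already trained locally on its private data
# (local training omitted here).
clients = [make_model() for _ in range(NUM_CLIENTS)]

# Shared distillation data visible to clients and server (hypothetical).
distill_x = torch.randn(256, FEAT_DIM)

# 1) Clients upload only their predictions on the distillation set,
#    never their model parameters.
with torch.no_grad():
    client_probs = torch.stack([F.softmax(m(distill_x), dim=-1) for m in clients])

# 2) The server averages the predictions to form an ensemble "teacher" target.
teacher = client_probs.mean(dim=0)

# 3) The central (student) model is trained to match the ensemble via KL divergence.
student = make_model()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    log_p = F.log_softmax(student(distill_x), dim=-1)
    loss = F.kl_div(log_p, teacher, reduction="batchmean")
    loss.backward()
    opt.step()
```

In this sketch, communication per round is only the prediction matrix on the distillation set (here 256 x NUM_CLASSES values per client) rather than full model parameters, which is the source of the communication savings the abstract refers to.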