DBFed: Debiasing Federated Learning Framework based on Domain-Independent Debiasing
DOI:
10.48550/arxiv.2307.05582
Publication Date:
2023-01-01
AUTHORS (5)
ABSTRACT
As digital transformation continues, enterprises are generating, managing, and storing vast amounts of data, while artificial intelligence technology is advancing rapidly. However, this also brings challenges for information security and data security. Data security refers to protecting data from unauthorized access, damage, theft, and other threats throughout its entire life cycle. With the promulgation and implementation of data security laws and the growing emphasis on data privacy by organizations and users, privacy-preserving computing, represented by federated learning, has a wide range of application scenarios. Federated learning is a distributed machine learning framework that allows multiple subjects to train a joint model without sharing their data, thereby protecting data privacy and solving the problem of data islands. However, because the data held by the participating subjects are independent of each other, differences in data quality may cause fairness issues in federated learning modeling, such as data bias among subjects, resulting in biased and discriminatory models. Therefore, we propose DBFed, a debiasing federated learning framework based on domain-independent debiasing, which mitigates model bias by explicitly encoding sensitive attributes during client-side training. This paper conducts experiments on three real datasets and uses five evaluation metrics of accuracy and fairness to quantify the debiasing effect of the model. Most metrics of DBFed exceed those of the other comparative methods, fully demonstrating the debiasing effect of DBFed.
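Below is a minimal, illustrative sketch (in PyTorch) of the two mechanisms the abstract summarizes: federated averaging, where clients train locally and exchange only model weights rather than raw data, and domain-independent debiasing, where the classifier keeps a separate output head per sensitive-attribute group during client-side training and averages the heads at inference. The names (GroupHeadNet, local_train, fed_avg), the network architecture, and the hyperparameters are assumptions for illustration, not DBFed's actual implementation.

# Sketch only: domain-independent client-side training inside a FedAvg loop.
# Nothing here is taken from the DBFed codebase; it illustrates the general idea.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class GroupHeadNet(nn.Module):
    """Shared feature extractor with one classification head per sensitive group."""

    def __init__(self, in_dim: int, num_classes: int, num_groups: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU())
        self.heads = nn.ModuleList(
            [nn.Linear(32, num_classes) for _ in range(num_groups)]
        )

    def forward(self, x, group=None):
        feat = self.backbone(x)
        if group is None:
            # Inference: the sensitive attribute is not needed; average all heads.
            return torch.stack([h(feat) for h in self.heads]).mean(dim=0)
        # Training: route each example through the head of its own sensitive group,
        # explicitly encoding the attribute instead of letting the model absorb it.
        logits = torch.stack([h(feat) for h in self.heads], dim=1)
        return logits[torch.arange(x.size(0)), group]


def local_train(model, x, y, group, epochs=5, lr=0.1):
    """One client's local update; raw data never leaves the client."""
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(x, group), y)
        loss.backward()
        opt.step()
    return model.state_dict()


def fed_avg(states):
    """Unweighted FedAvg: average each parameter tensor across client updates."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in states]).mean(dim=0)
    return avg


if __name__ == "__main__":
    torch.manual_seed(0)
    global_model = GroupHeadNet(in_dim=8, num_classes=2, num_groups=2)
    # Synthetic data for 3 clients: features, labels, sensitive group ids.
    clients = [
        (torch.randn(64, 8), torch.randint(0, 2, (64,)), torch.randint(0, 2, (64,)))
        for _ in range(3)
    ]
    for rnd in range(5):  # communication rounds
        states = [local_train(global_model, x, y, g) for x, y, g in clients]
        global_model.load_state_dict(fed_avg(states))
    # Inference uses the averaged heads, so no sensitive attribute is required.
    preds = global_model(torch.randn(4, 8)).argmax(dim=1)
    print("sample predictions:", preds.tolist())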