DC-SHAP Method for Consistent Explainability in Privacy-Preserving Distributed Machine Learning
DOI:
10.1007/s44230-023-00032-4
Publication Date:
2023-07-06T06:02:26Z
AUTHORS (3)
ABSTRACT
Abstract: Ensuring the transparency of machine learning models is vital for their ethical application in various industries. There has been a concurrent trend toward distributed machine learning, designed to limit access to training data out of privacy concerns. Such models, trained over horizontally or vertically partitioned data, present a challenge for explainable AI because the explaining party may have only a biased view of the background data or a partial view of the feature space. As a result, explanations obtained from different participants might not be consistent with one another, undermining trust in the product. This paper presents an Explainable Data Collaboration Framework based on a model-agnostic additive feature attribution algorithm (KernelSHAP) and the Data Collaboration method of privacy-preserving distributed machine learning. In particular, we present three algorithms for different scenarios of explainability and verify their consistency with experiments on open-access datasets. Our results demonstrated a significant (by at least a factor of 1.75) decrease in feature attribution discrepancies among the users of distributed machine learning. The proposed framework improves the consistency of explanations among participants, which can enhance trust in the product and enable its wider adoption.
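As background for the additive feature attribution method the abstract refers to, the following is a minimal sketch of exact Shapley-value attribution for a toy model with a single background point. KernelSHAP approximates these values by weighted regression; this brute-force enumeration is only an illustration, and all function names here are ours, not from the paper.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, background):
    """Exact Shapley attributions for f at point x, relative to a background point.
    Features absent from a coalition are replaced by their background values."""
    n = len(x)

    def eval_subset(S):
        # Evaluate f with features in S taken from x, the rest from the background.
        z = [x[i] if i in S else background[i] for i in range(n)]
        return f(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size k
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (eval_subset(set(S) | {i}) - eval_subset(set(S)))
    return phi

# Toy linear model: for linear f, each attribution is coefficient * (x_i - background_i)
f = lambda z: 2 * z[0] + 3 * z[1] + 1
phi = shapley_values(f, x=[1.0, 1.0], background=[0.0, 0.0])
# phi == [2.0, 3.0]; attributions sum to f(x) - f(background) = 5.0
```

The consistency problem the paper addresses arises precisely because each participant in distributed learning would call such an explainer with a different (partial or biased) background dataset, yielding different `phi` vectors for the same prediction.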