Enhancing Learning with Label Differential Privacy by Vector Approximation
Differential Privacy
DOI:
10.48550/arxiv.2405.15150
Publication Date:
2024-05-23
AUTHORS (6)
ABSTRACT
Label differential privacy (DP) is a framework that protects the privacy of labels in training datasets, while the feature vectors are public. Existing approaches protect the privacy of labels by flipping them randomly and then training a model whose output approximates the privatized label. However, as the number of classes $K$ increases, stronger randomization is needed, so the performance of these methods degrades significantly. In this paper, we propose a vector approximation approach, which is easy to implement and introduces little additional computational overhead. Instead of flipping each label into a single scalar, our method converts each label into a random vector with $K$ components, whose expectations reflect the class conditional probabilities. Intuitively, vector approximation retains more information than scalar labels. A brief theoretical analysis shows that the performance of our method decays only slightly with $K$. Finally, we conduct experiments on both synthesized and real datasets, which validate our theoretical analysis as well as the practical performance of our method.
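The abstract's core idea, converting a label into a random vector whose expectation reveals the true class, can be illustrated with a minimal sketch. This is not the authors' exact mechanism; it is a hedged illustration that one-hot encodes the label and applies binary randomized response to each of the $K$ components independently (function names and the choice of per-bit budget are assumptions for illustration):

```python
import numpy as np

def privatize_label_vector(label, K, epsilon, rng=None):
    """Illustrative sketch (not the paper's exact mechanism):
    one-hot encode the label, then flip each of the K bits
    independently via binary randomized response.

    Changing the label alters two components of the one-hot vector,
    so per-bit randomized response with budget epsilon/2 gives
    epsilon-label-DP overall by basic composition.
    """
    rng = np.random.default_rng() if rng is None else rng
    onehot = np.zeros(K)
    onehot[label] = 1.0
    # probability of keeping each bit under randomized response
    p_keep = np.exp(epsilon / 2) / (np.exp(epsilon / 2) + 1)
    flip = rng.random(K) >= p_keep
    return np.where(flip, 1.0 - onehot, onehot)

def debias(noisy, epsilon):
    """Unbiased estimate of the one-hot vector.

    Since E[noisy_k] = p*x_k + (1-p)*(1-x_k) = (2p-1)*x_k + (1-p),
    solving for x_k recovers the class-conditional signal in expectation.
    """
    p = np.exp(epsilon / 2) / (np.exp(epsilon / 2) + 1)
    return (noisy - (1.0 - p)) / (2.0 * p - 1.0)
```

Averaging `debias` over many privatized draws of the same label converges to the one-hot vector, which is the sense in which the vector's expectations reflect the class probabilities while each individual release stays private.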