A General Framework for Auditing Differentially Private Machine Learning

DOI: 10.48550/arxiv.2210.08643
Publication Date: 2022-01-01
ABSTRACT
We present a framework to statistically audit the privacy guarantee conferred by a differentially private machine learner in practice. While previous works have taken steps toward evaluating privacy loss through poisoning attacks or membership inference, they have been tailored to specific models or have demonstrated low statistical power. Our work develops a general methodology to empirically evaluate the privacy of differentially private machine learning implementations, combining improved privacy search and verification methods with a toolkit of influence-based poisoning attacks. We demonstrate significantly improved auditing power over previous approaches on a variety of models including logistic regression, Naive Bayes, and random forest. Our method can be used to detect privacy violations due to implementation errors or misuse. When violations are not present, it can aid in understanding the amount of information that can be leaked from a given dataset, algorithm, and privacy specification.
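As a reading aid, the following is a minimal sketch of the kind of statistical audit the abstract describes: train many models on neighboring datasets that differ in one poisoned "canary" point, run a distinguishing (membership) test on each resulting model, and convert the test's error rates into a high-confidence empirical lower bound on epsilon. This is not the authors' implementation; the function names, score distributions, threshold, and trial counts below are illustrative assumptions.

```python
# Minimal sketch of a distinguishing-game DP audit (illustrative, not the
# paper's code). Scores come from some membership test applied to models
# trained with the canary absent (dataset D) or present (dataset D').
import numpy as np
from scipy.stats import beta


def clopper_pearson_upper(k, n, alpha=0.05):
    """One-sided exact (1 - alpha) upper confidence bound for k successes in n trials."""
    if k >= n:
        return 1.0
    return beta.ppf(1.0 - alpha, k + 1, n - k)


def empirical_epsilon_lower_bound(scores_absent, scores_present, threshold, alpha=0.05):
    """
    Lower-bound epsilon from a thresholded distinguishing attack.
    For a pure (epsilon, 0)-DP mechanism, any test satisfies
    FNR >= exp(-epsilon) * (1 - FPR), so epsilon >= log((1 - FPR) / FNR).
    Plugging in upper confidence bounds on FPR and FNR keeps the
    resulting estimate a valid high-confidence lower bound.
    """
    scores_absent = np.asarray(scores_absent)
    scores_present = np.asarray(scores_present)
    fp = int(np.sum(scores_absent >= threshold))   # attack fires, canary absent
    fn = int(np.sum(scores_present < threshold))   # attack silent, canary present
    fpr_hi = clopper_pearson_upper(fp, len(scores_absent), alpha)
    fnr_hi = clopper_pearson_upper(fn, len(scores_present), alpha)
    if fpr_hi >= 1.0 or fnr_hi <= 0.0:
        return 0.0
    return max(0.0, float(np.log((1.0 - fpr_hi) / fnr_hi)))


# Toy usage with synthetic attack scores standing in for real training runs.
rng = np.random.default_rng(0)
scores_absent = rng.normal(0.0, 1.0, 1000)    # membership scores, canary absent
scores_present = rng.normal(1.5, 1.0, 1000)   # membership scores, canary present
print(empirical_epsilon_lower_bound(scores_absent, scores_present, threshold=0.75))
```

Because the confidence-bound substitution only shrinks the estimate, the reported value stays a valid lower bound on the true epsilon; a reported value exceeding the claimed epsilon would flag a privacy violation such as an implementation error or misuse.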