Towards Probabilistic Verification of Machine Unlearning
FOS: Computer and information sciences
Machine Learning (cs.LG)
Cryptography and Security (cs.CR)
Machine Learning (stat.ML)
DOI:
10.48550/arxiv.2003.04247
Publication Date:
2020-03
AUTHORS (4)
ABSTRACT
The right to be forgotten, also known as the right to erasure, is the right of individuals to have their data erased from an entity storing it. The status of this long-held notion was legally solidified recently by the General Data Protection Regulation (GDPR) in the European Union. Consequently, there is a need for mechanisms whereby users can verify if service providers comply with their deletion requests. In this work, we take the first step in proposing a formal framework to study the design of such verification mechanisms for data deletion requests -- also known as machine unlearning -- in the context of systems that provide machine learning as a service (MLaaS). Our framework allows the rigorous quantification of any verification mechanism based on standard hypothesis testing. Furthermore, we propose a novel backdoor-based verification mechanism and demonstrate its effectiveness in certifying data deletion with high confidence, thus providing a basis for quantitatively inferring machine unlearning.

We evaluate our approach over a range of network architectures such as multi-layer perceptrons (MLP), convolutional neural networks (CNN), residual networks (ResNet), and long short-term memory networks (LSTM), as well as 5 different datasets. Our approach has minimal effect on the ML service's accuracy but provides high-confidence verification, and the proposed mechanism works even when only a handful of users employ our system to ascertain compliance. In particular, with just 5% of users participating, modifying half of their data with a backdoor, and issuing merely 30 test queries, both the false positive and false negative ratios fall below $10^{-3}$. We also show the robustness of our approach by testing it against an adaptive adversary that uses a state-of-the-art backdoor defense method.
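To make the hypothesis-testing quantification concrete, the sketch below models the verification decision as a binomial test on backdoor query outcomes: under the "data deleted" hypothesis a trigger query hits the backdoor target label only at chance level, while under "data retained" it succeeds with high probability, and the false positive/negative rates follow from binomial tails. This is an illustrative sketch only, not the authors' implementation; the success probabilities `p_deleted` and `p_retained` and the decision threshold are assumed values chosen for illustration (only the 30-query budget comes from the abstract).

```python
# Illustrative sketch of a backdoor-query hypothesis test (not the paper's code).
# Assumed quantities (hypothetical):
#   p_deleted  - chance that a trigger query returns the backdoor target label
#                if the user's data WAS unlearned (roughly chance level)
#   p_retained - success probability if the data was NOT unlearned
#   threshold  - declare "not deleted" if at least this many queries succeed
from math import comb

def binom_tail(n, p, k_min):
    """P[X >= k_min] for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

def error_rates(n_queries=30, p_deleted=0.1, p_retained=0.9, threshold=15):
    """Return (false_positive, false_negative) rates of the threshold rule."""
    # False positive: data was deleted, yet many trigger queries still succeed.
    fp = binom_tail(n_queries, p_deleted, threshold)
    # False negative: data was retained, yet too few trigger queries succeed.
    fn = 1.0 - binom_tail(n_queries, p_retained, threshold)
    return fp, fn

if __name__ == "__main__":
    fp, fn = error_rates()
    print(f"false positive ~ {fp:.2e}, false negative ~ {fn:.2e}")
```

With these assumed probabilities and 30 queries, both error rates land well below $10^{-3}$, which is the kind of regime the abstract reports; the actual mechanism in the paper aggregates backdoor success across participating users rather than using a single fixed threshold.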