Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearning
FOS: Computer and information sciences
Computer Science - Cryptography and Security
Cryptography and Security (cs.CR)
DOI: 10.48550/arxiv.2404.03233
Publication Date: 2024-04-04
AUTHORS (4)
ABSTRACT
Machine unlearning has become a promising solution for fulfilling the "right to be forgotten", under which individuals can request the deletion of their data from machine learning models. However, existing studies mainly focus on the efficacy and efficiency of unlearning methods, while neglecting the privacy vulnerability introduced during the unlearning process. With two versions of a model available to an adversary, that is, the original model and the unlearned model, machine unlearning opens up a new attack surface. In this paper, we conduct the first investigation into the extent to which unlearning can leak the confidential content of the unlearned data. Specifically, under the Machine Learning as a Service setting, we propose unlearning inversion attacks that reveal the feature and label information of an unlearned sample by only accessing the original and unlearned models. The effectiveness of the proposed attacks is evaluated through extensive experiments on benchmark datasets, across various model architectures, and on both exact and approximate representative unlearning approaches. The experimental results indicate that the proposed attacks can reveal sensitive information of the unlearned data. As such, we identify three possible defenses that help mitigate the attacks, at the cost of reducing the utility of the unlearned model. The study in this paper uncovers an underexplored gap between machine unlearning and the privacy of the unlearned data, highlighting the need for careful design of unlearning mechanisms that implement the right to be forgotten without leaking the content of the deleted data.
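To make the attack surface concrete, below is a minimal sketch of a label-inference-style unlearning inversion: the adversary queries both model versions on probe inputs and guesses that the class whose confidence drops most after unlearning is the label of the deleted sample. The model and function names, the probing strategy, and the toy linear models are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch: infer the label of an unlearned sample by comparing the posteriors
# of the original and unlearned models (assumption: both are exposed by the
# MLaaS provider and return class logits).
import torch
import torch.nn as nn


def infer_unlearned_label(original_model: nn.Module,
                          unlearned_model: nn.Module,
                          probe_inputs: torch.Tensor) -> int:
    """Guess the unlearned sample's label from per-class confidence drops."""
    original_model.eval()
    unlearned_model.eval()
    with torch.no_grad():
        p_orig = torch.softmax(original_model(probe_inputs), dim=1)
        p_unl = torch.softmax(unlearned_model(probe_inputs), dim=1)
    # Average drop in confidence per class across the probe set; the class
    # that lost the most confidence is the guess for the deleted label.
    drop = (p_orig - p_unl).mean(dim=0)
    return int(drop.argmax().item())


if __name__ == "__main__":
    # Toy stand-ins for the two model versions; real attacks would query the
    # deployed original and unlearned models instead.
    torch.manual_seed(0)
    original = nn.Linear(8, 3)
    unlearned = nn.Linear(8, 3)
    probes = torch.randn(16, 8)  # adversary-chosen probe inputs
    print("guessed label:", infer_unlearned_label(original, unlearned, probes))
```

The paper's feature-recovery attack additionally reconstructs the input itself; the sketch above only illustrates the simpler difference-based label inference under the stated assumptions.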