privGAN: Protecting GANs from membership inference attacks at low cost
FOS: Computer and information sciences
Computer Science - Machine Learning
Computer Science - Cryptography and Security
Statistics - Machine Learning
Computer Vision and Pattern Recognition (cs.CV)
Computer Science - Computer Vision and Pattern Recognition
0202 electrical engineering, electronic engineering, information engineering
Machine Learning (stat.ML)
02 engineering and technology
Cryptography and Security (cs.CR)
Machine Learning (cs.LG)
DOI:
10.48550/arxiv.2001.00071
Publication Date:
2020-01-01
AUTHORS (4)
ABSTRACT
Generative Adversarial Networks (GANs) have made releasing synthetic images a viable approach to sharing data without releasing the original dataset. It has been shown that such synthetic data can be used for a variety of downstream tasks, such as training classifiers that would otherwise require the original dataset to be shared. However, recent work has shown that GAN models and their synthetically generated data can be used to infer training set membership by an adversary who has access to the entire dataset and some auxiliary information. Current approaches to mitigating this problem (such as DPGAN) lead to dramatically poorer generated sample quality than non-private GANs. Here we develop a new architecture (privGAN), where the generator is trained not only to cheat the discriminator but also to defend against membership inference attacks. The new mechanism provides protection against this mode of attack while leading to negligible loss in downstream performance. In addition, our algorithm explicitly prevents overfitting to the training set, which explains why it is so effective. The main contributions of the paper are: i) we propose a novel architecture that generates synthetic data in a privacy-preserving manner without additional hyperparameter tuning or architecture selection, ii) we provide a theoretical understanding of the optimal solution of the privGAN loss function, iii) we demonstrate the effectiveness of our model against several white-box and black-box attacks on benchmark datasets, iv) we demonstrate on three common datasets that privGAN leads to negligible loss in downstream performance when compared to non-private GANs.
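The defense mechanism described in the abstract can be illustrated with a short, hypothetical sketch (not the authors' released code): the generator minimizes the usual adversarial loss against its real/fake discriminator, plus a weighted term that rewards it for fooling a separate privacy adversary that tries to link generated samples back to the training split that produced them. The module names (G, D, D_priv), the split-identifier target, and the weight lam are illustrative assumptions, not details confirmed by this page.

import torch
import torch.nn.functional as F

def generator_loss(G, D, D_priv, z, split_id, lam=1.0):
    # Hypothetical combined objective: fool the real/fake discriminator D
    # while also preventing a privacy adversary D_priv from recovering
    # which training split (split_id) this generator was trained on.
    fake = G(z)

    # Standard non-saturating GAN term: push D to score fakes as real.
    adv = F.binary_cross_entropy_with_logits(
        D(fake), torch.ones(fake.size(0), 1))

    # Privacy term: D_priv outputs logits over training splits; subtracting
    # its cross-entropy rewards the generator when D_priv guesses wrong.
    target = torch.full((fake.size(0),), split_id, dtype=torch.long)
    priv = -F.cross_entropy(D_priv(fake), target)

    return adv + lam * priv

In this sketch, minimizing adv + lam * priv trades generation quality against resistance to membership inference, with lam controlling that trade-off; the sign on the privacy term means the generator is pushed toward samples the privacy adversary cannot attribute to any particular split.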