PGAN-KD: Member Privacy Protection of GANs Based on Knowledge Distillation

Keywords: Discriminator, Clipping (morphology), Private information retrieval
DOI: 10.1109/bigdata59044.2023.10386917 Publication Date: 2024-01-22T18:28:47Z
ABSTRACT
Generative adversarial networks (GANs) have been widely used for creating diverse data such as images, audio, and videos. However, the datasets used to train GANs often contain sensitive information, which makes GANs vulnerable to privacy attacks on the dataset, such as membership inference attacks (MIAs). To improve resistance to MIAs while preserving performance, we design a novel GAN framework, PGAN-KD (member Privacy protection of GANs based on Knowledge Distillation). PGAN-KD prevents the discriminator from leaking membership information by introducing knowledge distillation and gradient clipping. Specifically, it adopts an extra teacher discriminator and transfers its distilled knowledge to a student discriminator, thereby isolating attackers who would indirectly obtain private information through the generator. In addition, gradient clipping protects the discriminator itself against MIAs. To evaluate the performance of PGAN-KD, we conducted experiments on both real and simulated datasets. The results indicate that PGAN-KD achieves a 7.8% improvement in privacy protection while maintaining generation quality similar to the baselines.
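The two mechanisms the abstract names, teacher-to-student knowledge distillation and gradient clipping, can be illustrated with a minimal sketch. The code below is an illustrative assumption, not the paper's implementation: it stands in for the discriminators with tiny logistic models, trains a "student" to mimic a fixed "teacher's" soft outputs (distillation), and caps each gradient update's L2 norm (clipping) so no single batch dominates the student's weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Teacher discriminator: fixed weights, standing in for a model
# trained on the private data (random here for illustration).
w_teacher = rng.normal(size=4)

# Student discriminator: starts from zero and never sees raw labels,
# only the teacher's soft scores.
w_student = np.zeros(4)

X = rng.normal(size=(256, 4))   # surrogate inputs used for distillation
lr, clip_norm = 0.5, 1.0        # learning rate and clipping threshold

for _ in range(200):
    t = sigmoid(X @ w_teacher)  # teacher's soft targets
    s = sigmoid(X @ w_student)  # student's current predictions
    # Gradient of the MSE distillation loss w.r.t. student weights
    grad = X.T @ ((s - t) * s * (1 - s)) / len(X)
    # Gradient clipping: rescale the update if its L2 norm is too large
    norm = np.linalg.norm(grad)
    if norm > clip_norm:
        grad *= clip_norm / norm
    w_student -= lr * grad

# After distillation, the student's scores track the teacher's
gap = np.mean(np.abs(sigmoid(X @ w_student) - sigmoid(X @ w_teacher)))
```

In the actual framework the generator would train against the student discriminator only, so an attacker probing the generator never interacts with the model that directly saw the private data.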