Are Diffusion Models Vulnerable to Membership Inference Attacks?
Overfitting
DOI:
10.48550/arxiv.2302.01316
Publication Date:
2023-02-02
AUTHORS (5)
Jinhao Duan, Fei Kong, Shiqi Wang, Xiaoshuang Shi, Kaidi Xu
ABSTRACT
Diffusion-based generative models have shown great potential for image synthesis, but there is a lack of research on the security and privacy risks they may pose. In this paper, we investigate the vulnerability of diffusion models to Membership Inference Attacks (MIAs), a common privacy concern. Our results indicate that existing MIAs designed for GANs or VAEs are largely ineffective on diffusion models, either due to inapplicable scenarios (e.g., requiring the discriminator of GANs) or inappropriate assumptions (e.g., closer distances between synthetic samples and member samples). To address this gap, we propose Step-wise Error Comparing Membership Inference (SecMI), a query-based MIA that infers memberships by assessing the matching of forward process posterior estimation at each timestep. SecMI follows the common overfitting assumption in MIAs, where member samples normally have smaller estimation errors than hold-out samples. We consider both standard diffusion models, e.g., DDPM, and text-to-image diffusion models, e.g., Latent Diffusion Models and Stable Diffusion. Experimental results demonstrate that our methods precisely infer membership with high confidence in both scenarios across multiple different datasets. Code is available at https://github.com/jinhaoduan/SecMI.
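The step-wise error idea in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes a DDIM-style deterministic transition, and `eps_model` (the trained noise predictor), `alpha_bar` (the cumulative noise schedule), the timestep `t`, and the threshold `tau` are all hypothetical stand-ins. The official code is at the repository linked above.

```python
# Minimal sketch of step-wise error comparison for membership inference,
# assuming a DDIM-style deterministic forward/reverse step. All names here
# (eps_model, alpha_bar, t, tau) are illustrative placeholders.
import torch

def ddim_step(x, eps_model, alpha_bar, t_from, t_to):
    """Deterministic DDIM transition between two timesteps (either direction)."""
    a_from, a_to = alpha_bar[t_from], alpha_bar[t_to]
    eps = eps_model(x, t_from)                                  # predicted noise
    x0_pred = (x - (1 - a_from).sqrt() * eps) / a_from.sqrt()   # implied clean image
    return a_to.sqrt() * x0_pred + (1 - a_to).sqrt() * eps

@torch.no_grad()
def step_wise_error(x0, eps_model, alpha_bar, t):
    """t-error: deterministically diffuse x0 to step t, push one step
    forward and one step back, and measure the reconstruction gap."""
    x = x0
    for s in range(t):                                  # deterministic forward to t
        x = ddim_step(x, eps_model, alpha_bar, s, s + 1)
    x_fwd = ddim_step(x, eps_model, alpha_bar, t, t + 1)        # one step ahead
    x_back = ddim_step(x_fwd, eps_model, alpha_bar, t + 1, t)   # and back
    return ((x_back - x) ** 2).flatten(1).sum(dim=1)    # per-sample squared error

def infer_membership(x0, eps_model, alpha_bar, t=100, tau=1.0):
    # Under the overfitting assumption, member samples tend to have smaller
    # t-errors, so thresholding the error yields a membership decision
    # (tau would be calibrated on data with known membership).
    return step_wise_error(x0, eps_model, alpha_bar, t) < tau
```

Using a deterministic (eta = 0) transition rather than the stochastic forward process makes the error repeatable across queries, which is what allows a single calibrated threshold to separate member samples from hold-out samples.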