Privacy Attacks on Image AutoRegressive Models
FOS: Computer and information sciences
Computer Vision and Pattern Recognition (cs.CV)
Machine Learning (cs.LG)
DOI:
10.48550/arXiv.2502.02514
Publication Date:
2025-02-04
AUTHORS (4)
ABSTRACT
Image autoregressive (IAR) models have surpassed diffusion models (DMs) in both image quality (FID: 1.48 vs. 1.58) and generation speed. However, their privacy risks remain largely unexplored. To address this, we conduct a comprehensive privacy analysis comparing IARs to DMs. We develop a novel membership inference attack (MIA) that achieves a significantly higher success rate in detecting training images (TPR@FPR=1%: 86.38% for IARs vs. 4.91% for DMs). Using this MIA, we perform dataset inference (DI) and find that as few as six samples suffice to detect dataset membership in IARs, compared to 200 for DMs, indicating higher information leakage. Additionally, we extract hundreds of training images from an IAR (e.g., 698 from VAR-d30). Our findings highlight a fundamental privacy-utility trade-off: while IARs excel in image quality and generation speed, they are more vulnerable to privacy attacks. This suggests that incorporating techniques from DMs, such as modeling the per-token probability distribution using diffusion, could help mitigate IARs' privacy risks. Our code is available at https://github.com/sprintml/privacy_attacks_against_iars.
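To make the attack primitives in the abstract concrete, below is a minimal sketch of (i) a loss-threshold membership inference attack on an autoregressive image model, (ii) the TPR@FPR=1% metric reported above, and (iii) a toy dataset-inference test that aggregates MIA scores over a few samples. This is an illustrative simplification under stated assumptions, not the paper's actual attack; in particular, the `model` interface (token sequences in, next-token logits out) is hypothetical.

```python
# Sketch of a loss-based MIA against an autoregressive image model.
# Assumption: `model` maps token ids (B, T-1) to next-token logits
# (B, T-1, vocab). This is NOT the paper's exact attack; it only shows
# the score-then-threshold structure common to loss-based MIAs.
import numpy as np
import torch
import torch.nn.functional as F
from scipy import stats

@torch.no_grad()
def sequence_nll(model, tokens):
    """Mean negative log-likelihood of a tokenized image under the model.
    Members (training images) tend to receive lower NLL than non-members."""
    logits = model(tokens[:, :-1])                  # predict each next token
    logp = F.log_softmax(logits, dim=-1)
    # nll_loss expects (B, vocab, T-1) scores against (B, T-1) targets.
    return F.nll_loss(logp.transpose(1, 2), tokens[:, 1:], reduction="mean").item()

def tpr_at_fpr(member_scores, nonmember_scores, fpr=0.01):
    """TPR@FPR=1%: pick the threshold that falsely flags only 1% of
    non-members, then report the fraction of members caught by it."""
    thresh = np.quantile(nonmember_scores, fpr)     # 1% of non-members below
    return float(np.mean(np.asarray(member_scores) <= thresh))

def dataset_inference_pvalue(suspect_scores, reference_scores):
    """Toy dataset inference: one-sided Welch t-test for whether a small
    suspect set scores significantly lower NLL than known non-members."""
    _, p = stats.ttest_ind(suspect_scores, reference_scores,
                           equal_var=False, alternative="less")
    return p
```

The paper's actual MIA is more sophisticated than a plain NLL threshold, but aggregating per-sample membership scores into a statistical test is the general shape that lets DI succeed with very few samples when per-sample leakage is high, as the abstract reports for IARs.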