On the Privacy Risks of Spiking Neural Networks: A Membership Inference Analysis

FOS: Computer and information sciences; Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
DOI: 10.48550/arxiv.2502.13191
Publication Date: 2025-02-18
ABSTRACT
Spiking Neural Networks (SNNs) are increasingly explored for their energy efficiency and robustness in real-world applications, yet their privacy risks remain largely unexamined. In this work, we investigate the susceptibility of SNNs to Membership Inference Attacks (MIAs) -- a major privacy threat in which an adversary attempts to determine whether a given sample was part of the training dataset. While prior work suggests that SNNs may offer inherent robustness due to their discrete, event-driven nature, we find that this resilience diminishes as latency (T) increases. Furthermore, we introduce an input dropout strategy under the black-box setting that significantly enhances membership inference against SNNs. Our findings challenge the assumption that SNNs are inherently more secure: although they are expected to fare better, our results reveal that SNNs exhibit privacy vulnerabilities comparable to those of Artificial Neural Networks (ANNs). Our code is available at https://anonymous.4open.science/r/MIA_SNN-3610.
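To make the attack setting concrete, below is a minimal sketch of a confidence-based black-box membership inference attack that applies input dropout to the query, in the spirit described in the abstract. All names and parameters (query_model, DROP_P, N_QUERIES, THRESHOLD) are illustrative assumptions, not the paper's actual implementation; the idea being illustrated is that a member sample's prediction confidence may hold up better than a non-member's when part of the input is randomly dropped.

```python
# Hypothetical sketch of a black-box MIA with input dropout on a
# rate-coded SNN input queried over T timesteps. Not the paper's code.
import numpy as np

rng = np.random.default_rng(0)
DROP_P = 0.2      # assumed fraction of input spikes dropped per query
N_QUERIES = 16    # assumed number of perturbed queries per sample
THRESHOLD = 0.85  # assumed decision threshold on mean confidence

def query_model(spike_train: np.ndarray) -> float:
    """Black-box oracle: returns the target model's top-class softmax
    confidence for the given input. Stubbed here as a placeholder."""
    raise NotImplementedError("replace with the target SNN's query API")

def infer_membership(spike_train: np.ndarray) -> bool:
    """Guess 'member' if confidence stays high under input dropout.

    Intuition (hedged): samples seen during training tend to retain
    high confidence even when spikes are randomly zeroed out, while
    unseen samples degrade faster under the same perturbation.
    """
    confidences = []
    for _ in range(N_QUERIES):
        mask = rng.random(spike_train.shape) >= DROP_P  # keep ~80% of spikes
        confidences.append(query_model(spike_train * mask))
    return float(np.mean(confidences)) >= THRESHOLD
```

In this sketch the attacker needs only query access and prediction confidences, matching the black-box threat model the abstract describes; the dropout mask plays the role of the input perturbation that amplifies the member/non-member gap.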