Adversarial attacks on spiking convolutional neural networks for event-based vision.
Subjects: Computer Vision and Pattern Recognition (cs.CV); Neuroscience
Keywords: spiking convolutional neural networks; adversarial examples; neuromorphic engineering; robust AI; dynamic vision sensors
Institute of Neuroinformatics
DOI: 10.5167/uzh-231251
Publication Date: 2022-12-22
AUTHORS (6)
ABSTRACT
Event-based dynamic vision sensors provide very sparse output in the form of spikes, which makes them suitable for low-power applications. Convolutional spiking neural networks model such event-based data and develop their full energy-saving potential when deployed on asynchronous neuromorphic hardware. Event-based vision being a nascent field, the sensitivity of spiking neural networks to potentially malicious adversarial attacks has received little attention so far. We show how white-box adversarial attack algorithms can be adapted to the discrete and sparse nature of event-based visual data, and demonstrate smaller perturbation magnitudes at higher success rates than the current state-of-the-art algorithms. For the first time, we also verify the effectiveness of these perturbations directly on neuromorphic hardware. Finally, we discuss the properties of the resulting perturbations, the effect of adversarial training as a defense strategy, and future directions.
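The abstract describes adapting white-box, gradient-based attacks to the discrete and sparse nature of event data, where a perturbation must flip individual binary events rather than add continuous noise. The sketch below illustrates that idea on a toy logistic classifier standing in for a spiking CNN (the model, budget parameter `k`, and function name are assumptions for illustration, not the paper's algorithm): the input gradient scores every candidate event flip, and only the `k` most damaging flips are applied, keeping the perturbation sparse and the result a valid binary event tensor.

```python
import numpy as np

def sparse_event_attack(x_events, w, y, k=5):
    """FGSM-style white-box attack restricted to binary event flips (sketch).

    x_events : binary {0,1} vector of events
    w        : weights of a toy logistic classifier (hypothetical stand-in
               for a trained spiking CNN)
    y        : true label in {0, 1}
    k        : budget of event flips (an L0 sparsity constraint)
    """
    # Forward pass: probability of class 1 under the logistic model.
    z = x_events @ w
    p = 1.0 / (1.0 + np.exp(-z))
    # Gradient of the cross-entropy loss with respect to the input.
    grad = (p - y) * w
    # A flip 0 -> 1 increases the loss where the gradient is positive,
    # a flip 1 -> 0 where it is negative; score each candidate flip.
    gain = np.where(x_events == 0, grad, -grad)
    # Apply only the k most damaging flips, so the perturbation stays
    # sparse and the adversarial input remains a valid event tensor.
    idx = np.argsort(-gain)[:k]
    x_adv = x_events.copy()
    x_adv[idx] = 1 - x_adv[idx]
    return x_adv
```

Unlike a continuous FGSM step, the output here is guaranteed to be binary, which is what makes such a perturbation realizable as actual dynamic-vision-sensor events.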