Evaluating the Robustness of Defense Mechanisms based on AutoEncoder Reconstructions against Carlini-Wagner Adversarial Attacks
Keywords: Autoencoder, Robustness, Adversarial machine learning
DOI: 10.7557/18.5173
Publication Date: 2020-02-26
ABSTRACT
Adversarial examples represent a serious problem affecting the security of machine learning systems. In this paper we focus on a defense mechanism based on reconstructing images with an autoencoder before classification. We experiment with several types of autoencoders and evaluate the impact of strategies such as injecting noise into the input during training and into the latent space at inference time. We tested the models against adversarial examples generated with the Carlini-Wagner attack, in a white-box scenario on the stacked system composed of the autoencoder and the classifier.
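
The paper's code is not reproduced on this page; the sketch below is an illustrative PyTorch implementation, assumed rather than taken from the authors, of the stacked defense the abstract describes: a denoising autoencoder that reconstructs each image before it reaches the classifier, with Gaussian noise injected into the input during training and, optionally, into the latent space at inference time. The names DenoisingAutoencoder and StackedDefense, the layer sizes, and the noise levels sigma and latent_sigma are all illustrative assumptions for a single-channel 28x28 input.

import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    # sigma: std of input noise during training; latent_sigma: std of
    # latent-space noise at inference time. Both values are assumptions.
    def __init__(self, sigma=0.1, latent_sigma=0.0):
        super().__init__()
        self.sigma = sigma
        self.latent_sigma = latent_sigma
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1,
                               output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        if self.training:
            # Noise injection in the input during training.
            x = x + self.sigma * torch.randn_like(x)
        z = self.encoder(x)
        if not self.training and self.latent_sigma > 0:
            # Noise injection in the latent space at inference time.
            z = z + self.latent_sigma * torch.randn_like(z)
        return self.decoder(z)

class StackedDefense(nn.Module):
    # Reconstruction followed by classification; in the white-box
    # scenario the attacker differentiates through this whole stack.
    def __init__(self, autoencoder, classifier):
        super().__init__()
        self.autoencoder = autoencoder
        self.classifier = classifier

    def forward(self, x):
        return self.classifier(self.autoencoder(x))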
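In the white-box setting the Carlini-Wagner L2 attack used for the evaluation is run against the stacked model end to end. The function below is a minimal untargeted sketch of that attack, using the tanh change of variables and the hinge-style objective from Carlini and Wagner's paper; the fixed trade-off constant c, the step count, and the confidence margin kappa are simplifying assumptions (the original attack binary-searches over c).

def cw_l2_attack(model, x, y, c=1.0, steps=200, lr=0.01, kappa=0.0):
    # Parameterize the adversarial image as x_adv = (tanh(w) + 1) / 2
    # so it always stays inside the valid pixel range [0, 1].
    w = torch.atanh((2 * x - 1).clamp(-0.999, 0.999)).detach()
    w.requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        x_adv = (torch.tanh(w) + 1) / 2
        logits = model(x_adv)
        true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)
        # Best logit among the wrong classes.
        other_logit = logits.scatter(1, y.unsqueeze(1),
                                     float('-inf')).max(dim=1).values
        # Hinge term: positive as long as the true class still wins.
        misclass = torch.clamp(true_logit - other_logit + kappa, min=0)
        # L2 distortion plus the weighted classification term.
        loss = ((x_adv - x) ** 2).flatten(1).sum(1) + c * misclass
        opt.zero_grad()
        loss.sum().backward()
        opt.step()
    return ((torch.tanh(w) + 1) / 2).detach()

Attacking the defended system white-box then amounts to passing the composed model, e.g. cw_l2_attack(StackedDefense(autoencoder, classifier), images, labels): gradients flow through the reconstruction step, which is the scenario the paper evaluates.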