Generation and Countermeasures of adversarial examples on vision: a survey
DOI: 10.1007/s10462-024-10841-z
Publication Date: 2024-07-08
ABSTRACT
Recent studies have found that deep learning models are vulnerable to adversarial examples: applying a carefully crafted, imperceptible perturbation to a clean example can effectively deceive a well-trained, high-accuracy deep learning model. Moreover, the model assigns the attacked label to an adversarial example with considerable confidence, while humans can barely discern the difference between clean and adversarial examples. This has raised tremendous concern about robust and trustworthy deep learning techniques. In this survey, we review the existence, generation, and countermeasures of adversarial examples in computer vision, providing comprehensive coverage of the field with an intuitive understanding of the underlying mechanisms, and summarizing the strengths, weaknesses, and major challenges of existing approaches. We hope this effort will ignite further interest in the community to solve current challenges and explore this fundamental area.
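
To make the generation step described above concrete, below is a minimal sketch of the one-step Fast Gradient Sign Method (FGSM), a canonical technique in this literature for crafting imperceptible perturbations. This is an illustrative assumption, not code from the survey itself: the PyTorch framing, the fgsm_attack name, and the epsilon budget of 0.03 are all hypothetical choices.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        # Work on a detached copy of the clean input with gradient tracking enabled.
        x = x.clone().detach().requires_grad_(True)
        # Forward pass and standard classification loss against the true labels.
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # One gradient-sign step: an L-infinity-bounded, visually small perturbation
        # that pushes the input in the direction that increases the loss.
        x_adv = x + epsilon * x.grad.sign()
        # Clamp back to the valid pixel range, assumed here to be [0, 1].
        return x_adv.clamp(0.0, 1.0).detach()

Here model can be any torch.nn.Module image classifier in eval mode; feeding the returned x_adv back through it typically yields a confident but wrong prediction, matching the behavior the abstract describes, even though x_adv is visually indistinguishable from x.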