Generation and countermeasures of adversarial examples on vision: a survey
DOI:
10.1007/s10462-024-10841-z
Publication Date:
2024-07-08T09:02:00Z
AUTHORS (6)
ABSTRACT
Recent studies have found that deep learning models are vulnerable to adversarial examples, demonstrating that applying a certain imperceptible perturbation to clean examples can effectively deceive well-trained, high-accuracy models. Moreover, the attacked models misclassify these examples with a considerable level of certainty in the attacked label. In contrast, humans can barely discern the difference between clean and perturbed examples, which has raised tremendous concern about robust and trustworthy deep learning techniques. In this survey, we reviewed the existence, generation, and countermeasures of adversarial examples in Computer Vision, to provide comprehensive coverage of the field with an intuitive understanding of the mechanisms, and summarized their strengths, weaknesses, and major challenges. We hope this effort will ignite further interest in the community to solve current challenges and explore this fundamental area.
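To make the abstract's claim concrete, the imperceptible perturbation it describes can be produced in a single gradient step. Below is a minimal PyTorch sketch of the Fast Gradient Sign Method (FGSM), one canonical attack covered by surveys of this area; `model`, `x` (a batch of images in [0, 1]), and `y` (true labels) are illustrative placeholders, not artifacts of this paper.

```python
# Sketch of FGSM adversarial example generation (Goodfellow et al., 2015).
# Assumes `model` maps images to class logits; `x`, `y` are placeholders.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Perturb clean input x to increase the model's loss on the true
    label y, keeping the change small (bounded by epsilon per pixel)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # loss w.r.t. the true label
    loss.backward()                       # gradient of loss w.r.t. input
    # Step in the loss-increasing direction; the sign maximizes the loss
    # under an L-infinity budget of epsilon, keeping it imperceptible.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # stay in valid pixel range
```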
REFERENCES (255)
CITATIONS (0)