Unauthorized AI cannot recognize me: Reversible adversarial example
FOS: Computer and information sciences
Computer Vision and Pattern Recognition (cs.CV)
Machine Learning (cs.LG)
DOI: 10.1016/j.patcog.2022.109048
Publication Date: 2022-09-24
AUTHORS (5)
ABSTRACT
arXiv admin note: text overlap with arXiv:1806.09186

In this study, we propose a new methodology for controlling how a user's data is recognized and used by AI, by exploiting the properties of adversarial examples. To this end, we propose the reversible adversarial example (RAE), a new type of adversarial example. A remarkable feature of the RAE is that the image can be correctly recognized and used by the AI model specified by the user: the authorized AI can recover the original image from the RAE exactly by eliminating the adversarial perturbation. Other, unauthorized AI models cannot recognize it correctly because it still functions as an adversarial example. Moreover, the RAE can be regarded as a form of encryption for computer vision, since its reversibility guarantees decryption. To realize the RAE, we combine three technologies: adversarial examples; reversible data hiding, for exact recovery of the adversarial perturbation; and encryption, for selective control of which AIs may remove that perturbation. Experimental results show that, under both white-box and black-box attacks, the proposed method achieves attack capability comparable to that of the corresponding adversarial attack method and visual quality similar to that of the original image.
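To make the three-component pipeline in the abstract concrete, the following is a minimal, hypothetical Python/NumPy sketch, not the authors' implementation; all names (make_rae, recover, xor_stream) are our own. Two simplifications: the paper hides the encrypted perturbation inside the adversarial image itself via reversible data hiding, whereas this sketch transmits the ciphertext alongside the image; and a toy XOR keystream stands in for a real cipher (e.g., AES-CTR). The FGSM-style gradient sign would normally come from the attacked model; here a random stand-in is used.

import numpy as np

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Toy keystream cipher (repeating key); a real system would use AES-CTR or similar.
    ks = (key * (len(data) // len(key) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, ks))

def make_rae(x: np.ndarray, grad_sign: np.ndarray, key: bytes, eps: int = 2):
    """Build a reversible adversarial example from an 8-bit image x."""
    delta = (eps * grad_sign).astype(np.int16)           # FGSM-style perturbation
    x_adv = np.clip(x.astype(np.int16) + delta, 0, 255).astype(np.uint8)
    # Record the perturbation actually applied (after clipping), so that
    # subtraction recovers the original image exactly.
    applied = x_adv.astype(np.int16) - x.astype(np.int16)    # values in [-eps, eps]
    ct = xor_stream(applied.astype(np.int8).tobytes(), key)  # only key holders can undo it
    return x_adv, ct

def recover(x_adv: np.ndarray, ct: bytes, key: bytes) -> np.ndarray:
    """Authorized recovery: decrypt the perturbation and subtract it."""
    applied = np.frombuffer(xor_stream(ct, key), dtype=np.int8).reshape(x_adv.shape)
    return (x_adv.astype(np.int16) - applied).astype(np.uint8)

# Round trip on a random stand-in image with a random stand-in gradient sign.
rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
g = rng.choice(np.array([-1, 1], dtype=np.int16), size=x.shape)
x_adv, ct = make_rae(x, g, key=b"secret-key")
assert np.array_equal(recover(x_adv, ct, b"secret-key"), x)  # exact recovery

An unauthorized model sees only x_adv, which remains adversarial; without the key, the ciphertext cannot be decrypted and exact recovery is impossible, which is the selective-access property the abstract describes.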