A machine and human reader study on AI diagnosis model safety under attacks of adversarial images
Adversarial machine learning
Generative adversarial network
DOI:
10.1038/s41467-021-27577-x
Publication Date:
2021-12-14T11:03:55Z
AUTHORS (9)
ABSTRACT
While active efforts are advancing medical artificial intelligence (AI) model development and clinical translation, safety issues of the AI models emerge, but little research has been done. We perform a pilot study to investigate the behaviors of an AI diagnosis model under adversarial images generated by Generative Adversarial Network (GAN) models and to evaluate the effects on human experts when visually identifying potential adversarial images. Our GAN model makes intentional modifications to the diagnosis-sensitive contents of mammogram images in deep learning-based computer-aided diagnosis (CAD) of breast cancer. In our experiments, the adversarial samples fool the AI-CAD model into outputting a wrong diagnosis on 69.1% of the cases that are initially correctly classified by the model. Five breast imaging radiologists visually identify 29%-71% of the adversarial samples. Our study suggests an imperative need for continuing research on medical AI model safety issues and for developing potential defensive solutions against adversarial attacks.
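To make the attack setup described in the abstract concrete, below is a minimal, hypothetical sketch of GAN-based adversarial image generation against a frozen CAD classifier. It is not the authors' actual implementation: the PyTorch architectures, loss weighting, perturbation scale, and placeholder data are all illustrative assumptions. The generator learns to perturb an input image so that it both looks realistic to a discriminator and flips the frozen classifier's diagnosis.

```python
# Hypothetical sketch of a GAN-based adversarial attack on a CAD classifier.
# Architectures, losses, and data below are illustrative assumptions, not the
# paper's actual models.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Produces an adversarially perturbed image close to the input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        # Small additive perturbation keeps the output visually similar.
        return torch.clamp(x + 0.1 * self.net(x), 0.0, 1.0)

class Discriminator(nn.Module):
    """Scores whether a 64x64 image looks like a real (unperturbed) image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(16 * 32 * 32, 1),
        )
    def forward(self, x):
        return self.net(x)

# Frozen stand-in for the AI-CAD breast cancer classifier under attack.
cad = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
for p in cad.parameters():
    p.requires_grad_(False)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
ce = nn.CrossEntropyLoss()

real = torch.rand(4, 1, 64, 64)          # placeholder batch of images
true_labels = torch.randint(0, 2, (4,))  # CAD's original (correct) outputs
target = 1 - true_labels                 # attack goal: flip the diagnosis

for step in range(100):
    # Discriminator step: real images vs. generated adversarial images.
    fake = G(real).detach()
    loss_d = bce(D(real), torch.ones(4, 1)) + bce(D(fake), torch.zeros(4, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: look realistic AND flip the frozen CAD's diagnosis.
    fake = G(real)
    loss_g = bce(D(fake), torch.ones(4, 1)) + ce(cad(fake), target)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

flipped = (cad(G(real)).argmax(1) == target).float().mean()
print(f"fraction of diagnoses flipped: {flipped:.2f}")
```

The key design point this sketch illustrates is that the classification loss against the flipped target steers the perturbation toward diagnosis-sensitive content, while the discriminator loss constrains the result to remain plausible to a human viewer, which is what makes such samples hard for radiologists to identify.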