Towards Security Threats of Deep Learning Systems: A Survey

DOI: 10.48550/arxiv.1911.12562
Publication Date: 2019-01-01
ABSTRACT
Deep learning has gained tremendous success and great popularity in the past few years. However, deep learning systems suffer from several inherent weaknesses, which can threaten the security of learning models. Deep learning's wide use further magnifies the impact and consequences. To this end, a great deal of research has been conducted with the purpose of exhaustively identifying these intrinsic weaknesses and subsequently proposing feasible mitigation. Yet it remains unclear how these weaknesses are incurred and how effective the attack approaches are in assaulting deep learning. In order to unveil the security weaknesses and aid the development of a robust deep learning system, we undertake an investigation of attacks towards deep learning, analyze these attacks, and conclude some findings from multiple views. In particular, we focus on four types of attacks associated with security threats to deep learning: model extraction attack, model inversion attack, poisoning attack, and adversarial attack. For each type of attack, we construct its essential workflow as well as the adversary's capabilities and goals. Pivot metrics are devised for comparing the attack approaches, by which we perform quantitative and qualitative analyses. From the analysis, we have identified significant and indispensable factors in an attack vector, e.g., how to reduce queries to target models, and what distance should be used for measuring perturbation. We shed light on 18 findings covering the approaches' merits and demerits, success probability, deployment complexity, and prospects. Moreover, we discuss other potential security weaknesses and possible mitigation which can inspire relevant research in this area.
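One of the abstract's recurring concerns for the adversarial attack category is the distance metric used to bound the crafted perturbation. The sketch below is a minimal, self-contained illustration of that idea using a fast-gradient-sign-style perturbation against a toy logistic-regression "victim" model; the weights, input values, and epsilon are invented for demonstration and are not drawn from the surveyed approaches.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear "victim" model with made-up, fixed parameters.
w = np.array([2.0, -1.0, 0.5])
b = 0.0

def predict(x):
    """Probability that input x belongs to class 1."""
    return sigmoid(w @ x + b)

def fgsm(x, y, eps):
    """Perturb x by eps in the sign of the input gradient of the loss.

    For the logistic loss L = -[y log p + (1 - y) log(1 - p)],
    the input gradient is dL/dx = (p - y) * w. The result stays
    within an L-infinity ball of radius eps around x (before
    clipping to the valid [0, 1] input range).
    """
    p = predict(x)
    grad = (p - y) * w
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

x = np.array([0.6, 0.2, 0.8])   # clean input, true label 1
x_adv = fgsm(x, y=1, eps=0.5)   # eps chosen large enough to flip this toy example

print("clean prediction:", predict(x))            # ~0.80 -> class 1
print("adversarial prediction:", predict(x_adv))  # ~0.41 -> class 0
print("L-infinity distance:", np.max(np.abs(x_adv - x)))  # <= eps

The L-infinity norm is only one choice; surveyed approaches also measure perturbation with L0 or L2 distances, and which metric is appropriate is exactly the kind of factor the paper's pivot-metric comparison is meant to surface.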