Generative Extraction of Audio Classifiers for Speaker Identification
Keywords: Generative model, Proxy (statistics), Vulnerability, Identification
DOI: 10.48550/arxiv.2207.12816
Publication Date: 2022-01-01
AUTHORS (5)
ABSTRACT
It is perhaps no longer surprising that machine learning models, especially deep neural networks, are particularly vulnerable to attacks. One such vulnerability that has been well studied is model extraction: a phenomenon in which the attacker attempts to steal a victim's model by training a surrogate model to mimic the decision boundaries of the victim model. Previous works have demonstrated the effectiveness of such an attack and its devastating consequences, but much of this work has been done primarily for image and text processing tasks. Our work is the first attempt to perform model extraction on {\em audio classification models}. We are motivated by an attacker whose goal is to mimic the behavior of a victim's model trained to identify a speaker. This is particularly problematic in security-sensitive domains such as biometric authentication. We find that prior model extraction techniques, where the attacker \textit{naively} uses a proxy dataset to attack a potential victim's model, fail. We therefore propose the use of a generative model to create a sufficiently large and diverse pool of synthetic attack queries. We show that our approach is able to extract a victim model trained on \texttt{LibriSpeech} using queries synthesized with a generative model based off \texttt{VoxCeleb}; we achieve a test accuracy of 84.41\% with a budget of 3 million queries.
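The abstract describes a three-step extraction pipeline: synthesize queries with a generative model, label them by querying the black-box victim, and train a surrogate on the resulting (query, label) pairs. The sketch below illustrates only that loop, not the paper's actual method: toy linear models stand in for the speaker-identification network and the audio generator, and Gaussian feature vectors stand in for synthetic audio. All names and dimensions here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES, N_SPEAKERS = 16, 4  # toy stand-ins, not the paper's settings

# "Victim" speaker classifier: a fixed random linear model. The attacker
# treats it as a black box that returns only a predicted speaker label.
W_victim = rng.normal(size=(N_FEATURES, N_SPEAKERS))

def victim_predict(x):
    """Black-box query API: hard label only, no scores."""
    return np.argmax(x @ W_victim, axis=1)

def synthesize_queries(n):
    """Stand-in for the generative model: Gaussian 'audio embeddings'."""
    return rng.normal(size=(n, N_FEATURES))

# Step 1-2: generate synthetic queries and label them with the victim.
X = synthesize_queries(5000)
y = victim_predict(X)

# Step 3: fit a surrogate (multinomial logistic regression, plain
# full-batch gradient descent) on the victim-labeled queries.
W_sur = np.zeros((N_FEATURES, N_SPEAKERS))
for _ in range(300):
    logits = X @ W_sur
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0          # softmax minus one-hot labels
    W_sur -= 0.1 * (X.T @ p) / len(y)       # cross-entropy gradient step

# Evaluate how often the surrogate agrees with the victim on fresh queries.
X_test = synthesize_queries(1000)
agreement = np.mean(np.argmax(X_test @ W_sur, axis=1) == victim_predict(X_test))
print(f"surrogate/victim agreement: {agreement:.2%}")
```

The design point the sketch captures is that extraction quality is bounded by the query distribution: if `synthesize_queries` does not cover the victim's input space (the failure mode the paper reports for naive proxy datasets), the surrogate agrees on the queries it saw but not on the victim's true decision boundaries.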