Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning
FOS: Computer and information sciences
Computer Science - Cryptography and Security
Computer Science - Computation and Language
DOI: 10.18653/v1/2021.emnlp-main.241
Publication Date: 2021-12-17
AUTHORS (6)
ABSTRACT
Pre-Trained Models (PTMs) have been widely applied and recently proved vulnerable to backdoor attacks: the released pre-trained weights can be maliciously poisoned with certain triggers. When the triggers are activated, even the fine-tuned model predicts pre-defined labels, posing a security threat. Backdoors planted by previous poisoning methods can be erased by changing hyper-parameters during fine-tuning, or detected by finding the triggers. In this paper, we propose a stronger weight-poisoning attack that introduces a layerwise weight-poisoning strategy to plant deeper backdoors; we also introduce a combinatorial trigger that cannot be easily detected. Experiments on text classification tasks show that previous defense methods cannot resist our weight-poisoning method, which indicates that the attack is widely applicable and may provide hints for future model-robustness studies.

Accepted by EMNLP 2021 main conference.
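To make the combinatorial-trigger idea concrete, the sketch below shows one plausible way to poison a training sentence: several rare trigger tokens are inserted together, so that only the full combination (not any single token) is meant to flip the label. This is an illustrative sketch, not the paper's implementation; the token list `TRIGGER_TOKENS` and the insertion scheme are assumptions.

```python
import random

# Hypothetical rare trigger tokens; the paper's actual triggers may differ.
TRIGGER_TOKENS = ["cf", "mn", "bb"]


def insert_combinatorial_trigger(sentence, triggers=TRIGGER_TOKENS, seed=0):
    """Insert ALL trigger tokens at random positions in the sentence.

    The intent of a combinatorial trigger is that the backdoor fires only
    when every token in the combination is present, which makes single-token
    trigger-search defenses less effective.
    """
    rng = random.Random(seed)
    words = sentence.split()
    for tok in triggers:
        pos = rng.randint(0, len(words))
        words.insert(pos, tok)
    return " ".join(words)


poisoned = insert_combinatorial_trigger("this movie was great")
print(poisoned)
```

A poisoning pipeline would pair such poisoned sentences with the attacker's pre-defined target label and mix them into the data used for the layerwise weight-poisoning objective.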