Hidden Backdoors in Human-Centric Language Models

Cited: 67
Authors
Li, Shaofeng [1 ]
Liu, Hui [1 ]
Dong, Tian [1 ]
Zhao, Benjamin Zi Hao [2 ,3 ]
Xue, Minhui [4 ]
Zhu, Haojin [1 ]
Lu, Jialiang [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
[2] Univ New South Wales, Sydney, NSW, Australia
[3] CSIRO Data61, Sydney, NSW, Australia
[4] Univ Adelaide, Adelaide, SA, Australia
Source
CCS '21: PROCEEDINGS OF THE 2021 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY | 2021
Funding
National Natural Science Foundation of China; Australian Research Council
Keywords
backdoor attacks; natural language processing; homographs; text generation;
DOI
10.1145/3460120.3484576
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification
0812
Abstract
Natural language processing (NLP) systems have been proven to be vulnerable to backdoor attacks, whereby hidden features (backdoors) are trained into a language model and may only be activated by specific inputs (called triggers), tricking the model into producing unexpected behaviors. In this paper, we create covert and natural triggers for textual backdoor attacks, hidden backdoors, where triggers can fool both modern language models and human inspection. We deploy our hidden backdoors through two state-of-the-art trigger embedding methods. The first approach, via homograph replacement, embeds the trigger into deep neural networks through the visual spoofing of lookalike character replacement. The second approach uses subtle differences between text generated by language models and real natural text to produce trigger sentences with correct grammar and high fluency. We demonstrate that the proposed hidden backdoors can be effective across three downstream security-critical NLP tasks, representative of modern human-centric NLP systems, including toxic comment detection, neural machine translation (NMT), and question answering (QA). Our two hidden backdoor attacks can achieve an Attack Success Rate (ASR) of at least 97% with an injection rate of only 3% in toxic comment detection, 95.1% ASR in NMT with less than 0.5% injected data, and finally 91.12% ASR against QA updated with only 27 poisoning data samples on a model previously trained with 92,024 samples (0.029%). We demonstrate the adversary's high attack success rate while maintaining functionality for regular users, with triggers that remain inconspicuous to human administrators.
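The homograph-replacement idea described above can be illustrated with a minimal sketch (not the authors' implementation): selected Latin characters are swapped for visually confusable Unicode lookalikes, so the poisoned text appears unchanged to a human reviewer while producing different tokens for the model. The confusable map and function name here are illustrative assumptions.

```python
# Hypothetical confusable map: Latin letters -> Cyrillic lookalikes.
CONFUSABLES = {
    "a": "\u0430",  # Cyrillic small a
    "e": "\u0435",  # Cyrillic small ie
    "o": "\u043e",  # Cyrillic small o
}

def embed_homograph_trigger(text: str, positions: list[int]) -> str:
    """Swap characters at the given positions for lookalikes, if available."""
    chars = list(text)
    for i in positions:
        chars[i] = CONFUSABLES.get(chars[i], chars[i])
    return "".join(chars)

poisoned = embed_homograph_trigger("please review this comment", [2, 3])
print(poisoned)  # renders visually like the original string
print(poisoned == "please review this comment")  # False: codepoints differ
```

The key property is that the edit distance perceived by a human is zero while the byte/codepoint sequence (and hence the model's token sequence) changes, which is what lets such a trigger evade manual inspection.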
Pages: 3123-3140
Page count: 18