Defeating deep learning based de-anonymization attacks with adversarial example

Cited by: 1
Authors
Yin, Haoyu [1 ]
Liu, Yingjian [1 ]
Li, Yue [1 ]
Guo, Zhongwen [1 ]
Wang, Yu [2 ]
Affiliations
[1] Ocean Univ China, Coll Comp Sci & Technol, Qingdao 266100, Shandong, Peoples R China
[2] Temple Univ, Dept Comp & Informat Sci, Philadelphia, PA 19122 USA
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
Website fingerprinting; Adversarial example; Privacy; Deep learning; Anonymity;
DOI
10.1016/j.jnca.2023.103733
Chinese Library Classification (CLC)
TP3 [Computing technology, computer technology];
Discipline classification code
0812;
Abstract
Deep learning (DL) technologies bring new threats to network security. Website fingerprinting attacks (WFA) based on DL models can identify a victim's browsing activities even when they are protected by anonymity technologies. Unfortunately, traditional countermeasures (website fingerprinting defenses, WFD) fail to preserve privacy against DL models. In this paper, we apply adversarial example techniques to implement new WFD under two settings: static analysis (SA) and dynamic perturbation (DP). Although the DP setting is closer to a real-world scenario, supervision is almost unavailable in it because upcoming traffic is uncertain and temporal dependencies are difficult to analyze. The SA setting relaxes the real-time constraints so that WFD can be implemented from a supervised learning perspective. We propose the Greedy Injection Attack (GIA), a novel adversarial method for WFD under the SA setting based on a zero-injection vulnerability test. Furthermore, we propose Sniper, which mitigates the computational cost by using a DL model to approximate the zero-injection test; FCNSniper and RNNSniper are designed for the SA and DP settings, respectively. Experiments show that FCNSniper decreases the classification accuracy of the state-of-the-art WFA model by 96.57% with only 2.29% bandwidth overhead, and the learned knowledge can be efficiently transferred to RNNSniper. As an indirect adversarial example attack, FCNSniper generalizes well to different target WFA models and datasets without suffering fatal failures under adversarial training.
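The greedy-injection idea in the abstract can be illustrated with a toy sketch. Everything here is an illustrative assumption, not the paper's GIA or Sniper: `surrogate_score` is a stand-in for a trained WFA classifier's confidence, traces are encoded as ±1 packet directions, and dummy packets are incoming (-1) cells injected under a fixed budget.

```python
# Toy sketch (hypothetical, not the authors' code): greedily inject
# dummy incoming packets into a traffic trace until a surrogate
# classifier's confidence in the true class drops below a threshold.

def surrogate_score(trace):
    """Stand-in for a WFA model's confidence in the true class:
    here, simply the fraction of outgoing (+1) packets."""
    return sum(1 for d in trace if d == +1) / len(trace)

def greedy_injection(trace, budget, threshold=0.5):
    """Insert one dummy incoming (-1) packet per step at the position
    that most reduces the surrogate's confidence; stop when the
    confidence falls below the threshold or the budget is spent."""
    trace = list(trace)
    for _ in range(budget):
        if surrogate_score(trace) < threshold:
            break
        # Try every insertion position; keep the lowest-scoring trace.
        trace = min(
            (trace[:i] + [-1] + trace[i:] for i in range(len(trace) + 1)),
            key=surrogate_score,
        )
    return trace

original = [+1, +1, +1, -1, +1, +1]              # mostly outgoing packets
perturbed = greedy_injection(original, budget=5)
overhead = (len(perturbed) - len(original)) / len(original)
```

The bandwidth overhead reported in the abstract corresponds here to `overhead`, the fraction of injected dummy packets; the paper's contribution is replacing this expensive per-position search with a DL model (Sniper) that approximates the zero-injection test.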
Pages: 12