Backdoor Two-Stream Video Models on Federated Learning

Cited by: 1
Authors
Zhao, Jing [1]
Yang, Hongwei [1]
He, Hui [1]
Peng, Jie [1]
Zhang, Weizhe [1]
Ni, Jiangqun [2]
Sangaiah, Arun Kumar [3,4]
Castiglione, Aniello [5]
Affiliations
[1] Harbin Inst Technol, 92 Xidazhi St, Harbin 150001, Peoples R China
[2] Sun Yat Sen Univ, 66 Gongchang Rd, Shenzhen 518107, Peoples R China
[3] Natl Yunlin Univ Sci & Technol, 123 Daxue Rd, Yunlin 640301, Taiwan
[4] Lebanese Amer Univ, 36-S-12, Byblos, Lebanon
[5] Univ Salerno, 132 Via Giovanni Paolo II, I-84084 Fisciano, SA, Italy
Funding
National Natural Science Foundation of China
Keywords
Federated learning; backdoor attack; two-stream video recognition; adversarial attack
DOI
10.1145/3651307
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Video models on federated learning (FL) enable continual learning for video tasks on end-user devices while protecting the privacy of end-user data. As a result, security issues in FL, e.g., backdoor attacks on FL and their defenses, have become the subject of extensive research in recent years. Backdoor attacks on FL are a class of poisoning attacks in which an attacker, acting as one of the training participants, submits poisoned parameters and thereby injects a backdoor into the global model after aggregation. Existing FL-based backdoor attacks against video models poison only the RGB frames, so the attack can be easily mitigated by the neutralizing effect of the unpoisoned stream in a two-stream model. It is therefore a significant challenge to manipulate a state-of-the-art two-stream video model with a high success rate while poisoning only a small proportion of the training data in the FL framework. In this paper, a new backdoor attack scheme that exploits the rich spatial and temporal structure of video data is proposed; it injects backdoor triggers into both the optical flow and the RGB frames of video data through multiple rounds of model aggregation. In addition, an adversarial attack is applied to the RGB frames to further boost the robustness of the attack. Extensive experiments on real-world datasets verify that our method outperforms state-of-the-art backdoor attacks, showing better stealthiness and persistence.
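As a concrete illustration of the scheme sketched in the abstract, the following minimal Python/NumPy sketch shows how a malicious FL client could stamp backdoor triggers into both streams (RGB frames and optical flow) of a small fraction of its local clips before local training, and how a FedAvg-style server would unknowingly fold the resulting poisoned update into the global model. This is an assumption-laden sketch, not the authors' algorithm: all names, shapes, and constants (add_rgb_trigger, add_flow_trigger, poison_local_batch, fedavg, TARGET_LABEL, POISON_FRACTION) are hypothetical, and the adversarial perturbation of the RGB frames mentioned in the abstract is omitted.

# Minimal sketch (hypothetical names/shapes; NOT the paper's exact method).
import numpy as np

TARGET_LABEL = 0        # attacker-chosen target class (assumption)
POISON_FRACTION = 0.05  # poison only a small share of local clips (assumption)

def add_rgb_trigger(clip, size=8, value=1.0):
    # clip: (T, H, W, 3) float array in [0, 1]; stamp a bright patch
    # into the bottom-right corner of every frame.
    clip = clip.copy()
    clip[:, -size:, -size:, :] = value
    return clip

def add_flow_trigger(flow, size=8, magnitude=2.0):
    # flow: (T - 1, H, W, 2) array of (dx, dy) displacements; inject a
    # constant motion pattern at the same corner so both streams carry
    # the trigger and the unpoisoned stream cannot neutralize it.
    flow = flow.copy()
    flow[:, -size:, -size:, :] = magnitude
    return flow

def poison_local_batch(rgb_clips, flow_clips, labels, rng):
    # Relabel a small fraction of samples to the target class and
    # trigger both modalities of those samples.
    n = len(labels)
    idx = rng.choice(n, size=max(1, int(POISON_FRACTION * n)), replace=False)
    for i in idx:
        rgb_clips[i] = add_rgb_trigger(rgb_clips[i])
        flow_clips[i] = add_flow_trigger(flow_clips[i])
        labels[i] = TARGET_LABEL
    return rgb_clips, flow_clips, labels

def fedavg(client_weights):
    # Server-side FedAvg: unweighted average of each parameter across
    # clients; a poisoned update is averaged into the global model here.
    return {name: np.mean([w[name] for w in client_weights], axis=0)
            for name in client_weights[0]}

# Toy usage: 8 local clips of 16 frames at 112x112.
rng = np.random.default_rng(0)
rgb = [rng.random((16, 112, 112, 3)) for _ in range(8)]
flow = [rng.standard_normal((15, 112, 112, 2)) for _ in range(8)]
labels = list(rng.integers(1, 10, size=8))
rgb, flow, labels = poison_local_batch(rgb, flow, labels, rng)

In the attack described by the abstract, the malicious client would then train its local two-stream model on such poisoned data across multiple FL rounds, so that triggers embedded in both streams persist through server-side aggregation.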
Pages: 20