Backdoor Two-Stream Video Models on Federated Learning

Cited by: 1
Authors
Zhao, Jing [1]
Yang, Hongwei [1]
He, Hui [1]
Peng, Jie [1]
Zhang, Weizhe [1]
Ni, Jiangqun [2]
Sangaiah, Arun Kumar [3,4]
Castiglione, Aniello [5]
Affiliations
[1] Harbin Inst Technol, 92 Xidazhi St, Harbin 150001, Peoples R China
[2] Sun Yat Sen Univ, 66 Gongchang Rd, Shenzhen 518107, Peoples R China
[3] Natl Yunlin Univ Sci & Technol, 123 Daxue Rd, Yunlin 640301, Taiwan
[4] Lebanese Amer Univ, 36-S-12, Byblos, Lebanon
[5] Univ Salerno, 132 Via Giovanni Paolo II, I-84084 Fisciano, SA, Italy
Funding
National Natural Science Foundation of China
Keywords
Federated learning; backdoor attack; two-stream video recognition; adversarial attack
DOI
10.1145/3651307
Chinese Library Classification
TP [Automation and Computer Technology]
Discipline Code
0812
Abstract
Video models trained with federated learning (FL) enable continual learning for video tasks on end-user devices while protecting the privacy of end-user data. As a result, security issues in FL, e.g., backdoor attacks on FL and their defenses, have become the subject of extensive research in recent years. Backdoor attacks on FL are a class of poisoning attacks in which an attacker, acting as one of the training participants, submits poisoned parameters and thereby injects a backdoor into the global model after aggregation. Existing FL-based backdoor attacks against video models poison only the RGB frames, so the attack can be easily mitigated by two-stream model neutralization. It is therefore a significant challenge to manipulate a state-of-the-art two-stream video model with a high success rate while poisoning only a small proportion of the training data in the FL framework. In this paper, a new backdoor attack scheme that exploits the rich spatial and temporal structure of video data is proposed: it injects backdoor triggers into both the optical-flow and RGB frames of video data through multiple rounds of model aggregation. In addition, an adversarial attack is applied to the RGB frames to further boost the robustness of the attack. Extensive experiments on real-world datasets verify that our methods outperform state-of-the-art backdoor attacks and show better stealthiness and persistence.
Pages: 20
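
The abstract describes the attack only at a high level. As a rough illustration of the mechanics it names, the Python sketch below shows how a malicious FL client could stamp a trigger into both the RGB frames and the optical-flow fields of a fraction of its local clips, relabel them with a target class, and have the resulting update blended into the global model by plain FedAvg. All function names (add_rgb_trigger, add_flow_trigger, poison_client_batch, fedavg), the toy trigger patterns, and the poisoning rate are illustrative assumptions, not the paper's actual trigger design; the paper's additional adversarial perturbation of the RGB frames is omitted here.

    import numpy as np

    def add_rgb_trigger(frames, patch, top=0, left=0):
        # Stamp a fixed trigger patch into every RGB frame of one clip.
        # frames: (T, H, W, 3) array; patch: (h, w, 3) array.
        h, w, _ = patch.shape
        poisoned = frames.copy()
        poisoned[:, top:top + h, left:left + w, :] = patch
        return poisoned

    def add_flow_trigger(flows, delta, top=0, left=0):
        # Add a fixed perturbation to the optical-flow field of one clip.
        # flows: (T-1, H, W, 2) array; delta: (h, w, 2) array.
        h, w, _ = delta.shape
        poisoned = flows.copy()
        poisoned[:, top:top + h, left:left + w, :] += delta
        return poisoned

    def poison_client_batch(rgb, flow, labels, target_class, rate, rng):
        # On a malicious client, poison a fraction `rate` of the clips on
        # BOTH streams and relabel them with the attacker's target class.
        n = rgb.shape[0]
        idx = rng.choice(n, size=max(1, int(rate * n)), replace=False)
        patch = np.ones((8, 8, 3), dtype=rgb.dtype)         # toy white-square RGB trigger
        delta = 0.5 * np.ones((8, 8, 2), dtype=flow.dtype)  # toy flow perturbation
        rgb, flow, labels = rgb.copy(), flow.copy(), labels.copy()
        for i in idx:
            rgb[i] = add_rgb_trigger(rgb[i], patch)
            flow[i] = add_flow_trigger(flow[i], delta)
            labels[i] = target_class
        return rgb, flow, labels

    def fedavg(client_params, client_sizes):
        # Plain FedAvg: size-weighted average of flattened client parameter
        # vectors. A poisoned update submitted here is blended into the
        # global model at aggregation time.
        w = np.asarray(client_sizes, dtype=float)
        w /= w.sum()
        return sum(wi * p for wi, p in zip(w, client_params))

    # Toy usage: 4 clips of 16 frames at 112x112, UCF-101-sized label space.
    rng = np.random.default_rng(0)
    rgb = rng.random((4, 16, 112, 112, 3))
    flow = rng.random((4, 15, 112, 112, 2))
    labels = rng.integers(0, 101, size=4)
    p_rgb, p_flow, p_labels = poison_client_batch(rgb, flow, labels,
                                                  target_class=0, rate=0.25, rng=rng)

Poisoning both streams is what distinguishes this scheme from RGB-only video backdoors: a two-stream model that fuses its RGB and flow predictions would otherwise dilute a trigger present in a single stream.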