Defending Batch-Level Label Inference and Replacement Attacks in Vertical Federated Learning

Cited: 19
Authors
Zou, Tianyuan [1 ]
Liu, Yang [2 ]
Kang, Yan [3 ]
Liu, Wenhan [4 ]
He, Yuanqin [3 ]
Yi, Zhihao [3 ]
Yang, Qiang [5 ]
Zhang, Ya-Qin [2 ]
Affiliations
[1] Tsinghua Univ, Comp Sci & Technol, Beijing 100084, Peoples R China
[2] Tsinghua Univ, Inst AI Ind Res AIR, Beijing 100084, Peoples R China
[3] Webank, Shenzhen 518052, Peoples R China
[4] Shandong Univ, Weihai 264209, Peoples R China
[5] Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Kowloon, Hong Kong, Peoples R China
Keywords
Task analysis; Training; Protocols; Collaborative work; Data models; Homomorphic encryption; Differential privacy; Vertical federated learning; label inference; label replacement; confusional autoencoder; privacy;
DOI
10.1109/TBDATA.2022.3192121
Chinese Library Classification
TP [automation technology, computer technology];
Discipline code
0812;
Abstract
In a vertical federated learning (VFL) scenario where features and models are split across different parties, it has been shown that sample-level gradient information can be exploited to deduce crucial label information that should be kept secret. An immediate defense strategy is to protect the sample-level messages being communicated with Homomorphic Encryption (HE), exposing only batch-averaged local gradients to each party. In this paper, we show that even with HE-protected communication, private labels can still be reconstructed with high accuracy by a gradient inversion attack, contrary to the common belief that batch-averaged information is safe to share under encryption. We then show that a backdoor attack can also be mounted by directly replacing encrypted communicated messages without decryption. To counter these attacks, we propose a novel defense method, the Confusional AutoEncoder (CAE), which uses an autoencoder with entropy regularization to disguise the true labels. To further defend against attackers with sufficient prior label knowledge, we introduce the DiscreteSGD-enhanced CAE (DCAE), and show that DCAE achieves significantly higher main-task accuracy than other known defenses against various label inference attacks.
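The core idea summarized in the abstract is that the label owner trains an autoencoder over labels so that other parties only ever interact with a "confused" soft label, while an entropy term keeps that soft label uninformative and a decoder lets the label owner still recover the true class. The sketch below illustrates one way such a label-disguising objective could look in PyTorch; it is a minimal sketch under these assumptions, not the authors' released CAE implementation, and the network sizes, the `lambda_entropy` weight, and the helper names (`LabelAutoEncoder`, `cae_style_loss`) are illustrative.

```python
# Minimal sketch of a CAE-style label-disguising objective (assumed, not the
# paper's exact formulation): an encoder produces a "confused" soft label, a
# decoder recovers the true label, and an entropy term rewards confusion.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 10

class LabelAutoEncoder(nn.Module):
    """Encoder maps a one-hot label to a soft 'confused' label; decoder recovers it."""
    def __init__(self, num_classes: int = NUM_CLASSES, hidden: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(num_classes, hidden), nn.ReLU(), nn.Linear(hidden, num_classes)
        )
        self.decoder = nn.Sequential(
            nn.Linear(num_classes, hidden), nn.ReLU(), nn.Linear(hidden, num_classes)
        )

    def forward(self, one_hot):
        confused = F.softmax(self.encoder(one_hot), dim=-1)  # disguised soft label
        recovered = self.decoder(confused)                   # logits for recovery
        return confused, recovered

def cae_style_loss(one_hot, confused, recovered, lambda_entropy: float = 1.0):
    # 1) The label owner must still recover the true label from the confused one.
    recon = F.cross_entropy(recovered, one_hot.argmax(dim=-1))
    # 2) Entropy regularization: push the disguised label toward high entropy so
    #    batch-averaged gradients computed against it leak less about the true class.
    entropy = -(confused * confused.clamp_min(1e-12).log()).sum(dim=-1).mean()
    return recon - lambda_entropy * entropy

# Usage sketch: pretrain the autoencoder on one-hot labels, then train the VFL
# top model against the confused labels instead of the true ones.
ae = LabelAutoEncoder()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
labels = torch.randint(0, NUM_CLASSES, (64,))
one_hot = F.one_hot(labels, NUM_CLASSES).float()
for _ in range(100):
    confused, recovered = ae(one_hot)
    loss = cae_style_loss(one_hot, confused, recovered)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The DiscreteSGD enhancement mentioned in the abstract (DCAE) additionally processes the communicated gradients; it is not shown here since the abstract does not specify its construction.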
Pages: 1016-1027 (12 pages)