Defending Batch-Level Label Inference and Replacement Attacks in Vertical Federated Learning

Cited by: 17
Authors
Zou, Tianyuan [1 ]
Liu, Yang [2 ]
Kang, Yan [3 ]
Liu, Wenhan [4 ]
He, Yuanqin [3 ]
Yi, Zhihao [3 ]
Yang, Qiang [5 ]
Zhang, Ya-Qin [2 ]
Affiliations
[1] Tsinghua Univ, Comp Sci & Technol, Beijing 100084, Peoples R China
[2] Tsinghua Univ, Inst AI Ind Res AIR, Beijing 100084, Peoples R China
[3] Webank, Shenzhen 518052, Peoples R China
[4] Shandong Univ, Weihai 264209, Peoples R China
[5] Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Kowloon, Hong Kong, Peoples R China
Keywords
Task analysis; Training; Protocols; Collaborative work; Data models; Homomorphic encryption; Differential privacy; Vertical federated learning; label inference; label replacement; confusional autoencoder; privacy
DOI
10.1109/TBDATA.2022.3192121
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
In a vertical federated learning (VFL) scenario where features and models are split across parties, it has been shown that sample-level gradient information can be exploited to deduce crucial label information that should be kept secret. An immediate defense strategy is to protect sample-level messages communicated with Homomorphic Encryption (HE), exposing only batch-averaged local gradients to each party. In this paper, we show that even with HE-protected communication, private labels can still be reconstructed with high accuracy by a gradient inversion attack, contrary to the common belief that batch-averaged information is safe to share under encryption. We then show that a backdoor attack can also be conducted by directly replacing encrypted communicated messages without decryption. To counter these attacks, we propose a novel defense method, the Confusional AutoEncoder (CAE), which builds on autoencoders and entropy regularization to disguise true labels. To further defend against attackers with sufficient prior label knowledge, we introduce DiscreteSGD-enhanced CAE (DCAE), and show that DCAE achieves significantly higher main task accuracy than other known methods when defending against various label inference attacks.
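The label leakage the abstract builds on can be illustrated with a minimal sketch (not the paper's batch-level attack; all names here are illustrative). For softmax cross-entropy, the per-sample gradient with respect to the logits is p − onehot(y): the entry at the true class is p_y − 1 < 0 while all other entries are non-negative, so the sign pattern alone reveals the label — which is why VFL protocols try to expose only encrypted or batch-averaged gradients.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3))      # 4 samples, 3 classes (toy data)
labels = np.array([2, 0, 1, 2])       # private labels held by one party

# Per-sample logit gradient of cross-entropy: dL/dz = p - onehot(y).
probs = softmax(logits)
grads = probs.copy()
grads[np.arange(4), labels] -= 1.0

# The true-class entry is the unique negative one, so an adversary who
# observes sample-level gradients recovers every label exactly.
inferred = grads.argmin(axis=-1)
print(inferred)                       # matches `labels`
```

The same reasoning motivates the paper's stronger result: even averaging such gradients over a batch before sharing them (as HE-based protocols effectively do) does not fully destroy this label signal.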
Citation
Pages: 1016-1027
Page count: 12