A defense mechanism against label inference attacks in Vertical Federated Learning

Cited by: 0
Authors
Arazzi, Marco [1 ]
Nicolazzo, Serena [2 ]
Nocera, Antonino [1 ]
Affiliations
[1] Univ Pavia, Dept Elect Comp & Biomed Engn, Via A Ferrata 5, I-27100 Pavia, PV, Italy
[2] Univ Milan, Dept Comp Sci, Via G Celoria 18, I-20133 Milan, MI, Italy
Keywords
Federated learning; Vertical Federated Learning; VFL; Label inference attack; Knowledge distillation; k-anonymity
DOI
10.1016/j.neucom.2025.129476
CLC number
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Vertical Federated Learning (VFL, for short) is a category of Federated Learning that is gaining increasing attention in the context of Artificial Intelligence. In this paradigm, machine/deep learning models are trained collaboratively among parties holding vertically partitioned data. Typically, in a VFL scenario, the labels of the samples are kept private from all parties except the aggregating server, that is, the label owner. However, recent work has shown that, by exploiting the gradient information returned by the server to the bottom models and knowing auxiliary labels for only a very limited subset of training data points, an adversary can infer the private labels. These attacks are known as label inference attacks in VFL. In this work, we propose a novel framework called KDk (knowledge distillation with k-anonymity) that combines knowledge distillation and k-anonymity to provide a defense mechanism against label inference attacks in a VFL scenario. Through an extensive experimental campaign, we demonstrate that our approach consistently degrades the performance of the analyzed label inference attacks, in some cases by more than 60%, while leaving the accuracy of the overall VFL model almost unaltered.
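One plausible reading of the mechanism described in the abstract is that, instead of computing gradients against one-hot labels, the label owner distills soft labels from a teacher model and then flattens the probabilities of the top-k classes, so that the true label is hidden within a group of k equally likely candidates. The snippet below is a minimal NumPy sketch of this intuition, not the authors' implementation; the function name kdk_soft_labels and the parameters k and temperature are illustrative assumptions.

import numpy as np

def kdk_soft_labels(teacher_logits, k=4, temperature=3.0):
    """Produce k-anonymized soft labels from teacher logits (illustrative sketch).

    teacher_logits: (n_samples, n_classes) raw scores from a teacher model
    assumed to be held by the label owner.
    k: size of the anonymity group; the true class is hidden among the
    top-k classes, which all receive the same probability mass.
    temperature: distillation temperature used to soften the distribution.
    """
    # Temperature-scaled softmax (standard knowledge-distillation softening).
    z = teacher_logits / temperature
    z = z - z.max(axis=1, keepdims=True)          # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)

    anonymized = probs.copy()
    # Indices of the k most probable classes for each sample (unordered).
    topk = np.argpartition(probs, -k, axis=1)[:, -k:]
    rows = np.arange(probs.shape[0])[:, None]
    # Spread the combined mass of the top-k classes uniformly over them,
    # so gradients returned to the bottom models no longer single out
    # the ground-truth label.
    topk_mass = probs[rows, topk].sum(axis=1, keepdims=True)
    anonymized[rows, topk] = topk_mass / k
    return anonymized

# Hypothetical usage: the label owner would train the top model against
# these soft targets instead of one-hot labels.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(5, 10))             # 5 samples, 10 classes
    print(kdk_soft_labels(logits, k=4).round(3))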
Pages: 13
Related papers (50 records in total)
  • [1] Label Inference Attacks Against Vertical Federated Learning
    Fu, Chong
    Zhang, Xuhong
    Ji, Shouling
    Chen, Jinyin
    Wu, Jingzheng
    Guo, Shanqing
    Zhou, Jun
    Liu, Alex X.
    Wang, Ting
    PROCEEDINGS OF THE 31ST USENIX SECURITY SYMPOSIUM, 2022: 1397-1414
  • [2] FLSG: A Novel Defense Strategy Against Inference Attacks in Vertical Federated Learning
    Fan, Kai
    Hong, Jingtao
    Li, Wenjie
    Zhao, Xingwen
    Li, Hui
    Yang, Yintang
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (02): 1816-1826
  • [3] Threshold Filtering for Detecting Label Inference Attacks in Vertical Federated Learning
    Ding, Liansheng
    Bao, Haibin
    Lv, Qingzhe
    Zhang, Feng
    Zhang, Zhouyang
    Han, Jianliang
    Ding, Shuang
    ELECTRONICS, 2024, 13 (22)
  • [4] DLShield: A Defense Approach Against Dirty Label Attacks in Heterogeneous Federated Learning
    Sameera, K. M.
    Abhinav, M.
    Amal, P. P.
    Abhiram, T. Babu
    Abishek, Raj K.
    Amal, Tomichen
    Anainal, P.
    Vinod, P.
    Rafidha, Rehiman K. A.
    Conti, Mauro
    SECURITY, PRIVACY, AND APPLIED CRYPTOGRAPHY ENGINEERING, SPACE 2024, 2025, 15351: 129-148
  • [5] Defending Batch-Level Label Inference and Replacement Attacks in Vertical Federated Learning
    Zou, Tianyuan
    Liu, Yang
    Kang, Yan
    Liu, Wenhan
    He, Yuanqin
    Yi, Zhihao
    Yang, Qiang
    Zhang, Ya-Qin
    IEEE TRANSACTIONS ON BIG DATA, 2024, 10 (06): 1016-1027
  • [6] FEDCLEAN: A DEFENSE MECHANISM AGAINST PARAMETER POISONING ATTACKS IN FEDERATED LEARNING
    Kumar, Abhishek
    Khimani, Vivek
    Chatzopoulos, Dimitris
    Hui, Pan
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022: 4333-4337
  • [7] Digestive neural networks: A novel defense strategy against inference attacks in federated learning
    Lee, Hongkyu
    Kim, Jeehyeong
    Ahn, Seyoung
    Hussain, Rasheed
    Cho, Sunghyun
    Son, Junggab
    COMPUTERS & SECURITY, 2021, 109
  • [8] Beyond model splitting: Preventing label inference attacks in vertical federated learning with dispersed training
    Wang, Yilei
    Lv, Qingzhe
    Zhang, Huang
    Zhao, Minghao
    Sun, Yuhong
    Ran, Lingkai
    Li, Tao
    WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS, 2023, 26 (05): 2691-2707
  • [9] Data Quality Detection Mechanism Against Label Flipping Attacks in Federated Learning
    Jiang, Yifeng
    Zhang, Weiwen
    Chen, Yanxi
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18: 1625-1637