LDP-Purifier: Defending against Poisoning Attacks in Local Differential Privacy

Cited: 0
Authors
Wang, Leixia [1]
Ye, Qingqing [2]
Hu, Haibo [2]
Meng, Xiaofeng [1]
Huang, Kai [3]
Affiliations
[1] Renmin Univ China, Sch Informat, Beijing, Peoples R China
[2] Hong Kong Polytech Univ, Dept Elect & Elect Engn, Hung Hom, Hong Kong, Peoples R China
[3] Macau Univ Sci & Technol, Sch Comp Sci & Engn, Cotai, Macau, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
LDP; Poisoning attack; Histogram estimation;
DOI
10.1007/978-981-97-5562-2_14
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Local differential privacy (LDP) provides strong user privacy protection but is vulnerable to poisoning attacks launched by malicious users, which lead to contaminated estimates. Although various works explore attacks with different manipulation targets, a practical and relatively general defense has remained elusive. In this paper, we address this problem in basic histogram estimation scenarios. We model adversaries as Byzantine users who can collaborate to maximize their attack goals. From the perspective of attackers' capability, we analyze the impact of poisoning attacks on data utility and introduce a significant threat, the maximal loss attack (MLA). Since a high-utility-damage attack breaks the smoothness of the histogram, we propose a defense method, LDP-Purifier, to sterilize poisoned histograms. Our extensive experiments validate the effectiveness of the LDP-Purifier, showcasing its ability to significantly suppress estimation errors caused by various attacks.
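To make the setting concrete, the sketch below is a minimal illustration (not the paper's actual LDP-Purifier algorithm, which is not specified in this record): it simulates k-ary generalized randomized response (GRR) histogram estimation, a simple poisoning attack in which fake users bypass perturbation and all report one target bucket, and a naive smoothness-based repair that caps buckets deviating sharply from their neighbors. The names `grr_report`, `estimate`, and `purify`, the tolerance heuristic, and all parameters are illustrative assumptions.

```python
# Hypothetical sketch of LDP histogram poisoning and a smoothness-based
# repair. This is NOT the paper's LDP-Purifier; it only illustrates the
# attack surface and the "smoothness prior" intuition from the abstract.
import numpy as np

rng = np.random.default_rng(0)
k, eps = 16, 1.0                          # domain size, privacy budget
p = np.exp(eps) / (np.exp(eps) + k - 1)   # prob. of reporting true value
q = (1 - p) / (k - 1)                     # prob. of each other value

def grr_report(v):
    """Perturb one user's value with k-ary randomized response (GRR)."""
    if rng.random() < p:
        return v
    r = rng.integers(k - 1)               # uniform over the other k-1 values
    return r if r < v else r + 1

def estimate(reports, n):
    """Standard unbiased GRR frequency estimator."""
    counts = np.bincount(reports, minlength=k)
    return (counts / n - q) / (p - q)

# Honest users hold a smooth, decreasing distribution over k buckets.
n = 10_000
weights = np.arange(k, 0, -1)
true_vals = rng.choice(k, size=n, p=weights / weights.sum())
reports = [grr_report(v) for v in true_vals]

# Poisoning: 5% fake users skip perturbation and all report bucket 7,
# spiking that bucket's estimated frequency.
m = 500
poisoned = reports + [7] * m
est = estimate(np.array(poisoned), n + m)

def purify(hist, tol=3.0):
    """Cap any bucket exceeding its neighbors' mean by tol x the median
    absolute deviation: a crude smoothness prior, assumed for illustration."""
    out = hist.copy()
    mad = np.median(np.abs(hist - np.median(hist))) + 1e-9
    for i in range(len(hist)):
        nbrs = [hist[j] for j in (i - 1, i + 1) if 0 <= j < len(hist)]
        mu = np.mean(nbrs)
        if hist[i] - mu > tol * mad:
            out[i] = mu
    return out

print("poisoned bucket 7:", est[7], "-> purified:", purify(est)[7])
```

The repair here is deliberately crude; the idea it illustrates is the one stated in the abstract, namely that a high-damage attack must distort an otherwise smooth histogram, so outlier buckets can be detected and damped.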
Pages: 221-231
Page count: 11
Related Papers
50 records in total
  • [1] Efficient Defenses Against Output Poisoning Attacks on Local Differential Privacy
    Song, Shaorui
    Xu, Lei
    Zhu, Liehuang
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18 : 5506 - 5521
  • [2] LDPGuard: Defenses Against Data Poisoning Attacks to Local Differential Privacy Protocols
    Huang, Kai
    Ouyang, Gaoya
    Ye, Qingqing
    Hu, Haibo
    Zheng, Bolong
    Zhao, Xi
    Zhang, Ruiyuan
    Zhou, Xiaofang
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (07) : 3195 - 3209
  • [3] DEFENDING AGAINST BACKDOOR ATTACKS IN FEDERATED LEARNING WITH DIFFERENTIAL PRIVACY
    Miao, Lu
    Yang, Wei
    Hu, Rong
    Li, Lu
    Huang, Liusheng
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 2999 - 3003
  • [4] Data Poisoning Attacks to Local Differential Privacy Protocols
    Cao, Xiaoyu
    Jia, Jinyuan
    Gong, Neil Zhenqiang
    PROCEEDINGS OF THE 30TH USENIX SECURITY SYMPOSIUM, 2021, : 947 - 964
  • [5] Robust Estimation Method against Poisoning Attacks for Key-Value Data with Local Differential Privacy
    Horigome, Hikaru
    Kikuchi, Hiroaki
    Fujita, Masahiro
    Yu, Chia-Mu
    APPLIED SCIENCES-BASEL, 2024, 14 (14):
  • [6] Local Differential Privacy Protocol for Making Key-Value Data Robust Against Poisoning Attacks
    Horigome, Hikaru
    Kikuchi, Hiroaki
    Yu, Chia-Mu
    MODELING DECISIONS FOR ARTIFICIAL INTELLIGENCE, MDAI 2023, 2023, 13890 : 241 - 252
  • [7] AgrAmplifier: Defending Federated Learning Against Poisoning Attacks Through Local Update Amplification
    Gong, Zirui
    Shen, Liyue
    Zhang, Yanjun
    Zhang, Leo Yu
    Wang, Jingwei
    Bai, Guangdong
    Xiang, Yong
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 1241 - 1250
  • [8] Defending Against Targeted Poisoning Attacks in Federated Learning
    Erbil, Pinar
    Gursoy, M. Emre
    2022 IEEE 4TH INTERNATIONAL CONFERENCE ON TRUST, PRIVACY AND SECURITY IN INTELLIGENT SYSTEMS, AND APPLICATIONS, TPS-ISA, 2022, : 198 - 207
  • [9] CONTRA: Defending Against Poisoning Attacks in Federated Learning
    Awan, Sana
    Luo, Bo
    Li, Fengjun
    COMPUTER SECURITY - ESORICS 2021, PT I, 2021, 12972 : 455 - 475
  • [10] Defending Against Poisoning Attacks in Federated Learning with Blockchain
    Dong, Nanqing
    Wang, Zhipeng
    Sun, Jiahao
    Kampffmeyer, Michael
    Knottenbelt, William
    Xing, Eric
    IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE, 2024, 5 (07) : 1 - 13