Manipulating Federated Recommender Systems: Poisoning with Synthetic Users and Its Countermeasures

Cited by: 11
Authors
Yuan, Wei [1 ]
Quoc Viet Hung Nguyen [2 ]
He, Tieke [3 ]
Chen, Liang [4 ]
Yin, Hongzhi [1 ]
Affiliations
[1] Univ Queensland, Brisbane, Qld, Australia
[2] Griffith Univ, Gold Coast, Australia
[3] Nanjing Univ, Nanjing, Peoples R China
[4] Sun Yat Sen Univ, Guangzhou, Peoples R China
Source
PROCEEDINGS OF THE 46TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2023 | 2023
Funding
Australian Research Council;
Keywords
Federated Recommender System; Poisoning Attack and Defense;
DOI
10.1145/3539618.3591722
Chinese Library Classification (CLC)
TP [Automation technology; computer technology];
Discipline code
0812;
Abstract
Federated Recommender Systems (FedRecs) are considered privacy-preserving techniques for collaboratively learning a recommendation model without sharing user data. Since every participant can directly influence the system by uploading gradients, FedRecs are vulnerable to poisoning attacks from malicious clients. However, most existing poisoning attacks on FedRecs either rely on prior knowledge or are of limited effectiveness. To reveal the real vulnerability of FedRecs, in this paper we present a new poisoning attack that effectively manipulates target items' ranks and exposure rates in top-K recommendation without relying on any prior knowledge. Specifically, our attack manipulates target items' exposure rates through a group of synthetic malicious users who upload poisoned gradients crafted with reference to the target items' alternative products. We conduct extensive experiments with two widely used FedRecs (Fed-NCF and Fed-LightGCN) on two real-world recommendation datasets. The results show that our attack significantly improves the exposure rate of unpopular target items with far fewer malicious users and fewer global epochs than state-of-the-art attacks. Beyond disclosing this security hole, we design a novel countermeasure for poisoning attacks on FedRecs: a hierarchical gradient clipping scheme with sparsified updating that defends against existing poisoning attacks. The empirical results demonstrate that the proposed defense mechanism improves the robustness of FedRecs.
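The abstract names the proposed defense only at a high level ("hierarchical gradient clipping with sparsified updating"). The sketch below is a minimal illustration of what such a step could look like when applied to one client's item-embedding update before aggregation; the function names, the choice of two clipping levels (per-item and whole-update), the thresholds, and the top-k sparsification strategy are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

def hierarchical_clip(item_grads, row_norm_cap=0.1, global_norm_cap=1.0):
    """Illustrative two-level (hierarchical) clip of an item-embedding
    gradient matrix uploaded by a FedRec client. Thresholds are
    placeholders, not the paper's hyperparameters."""
    # Level 1: clip each item's embedding gradient (row-wise norm).
    row_norms = np.linalg.norm(item_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, row_norm_cap / (row_norms + 1e-12))
    clipped = item_grads * scale
    # Level 2: clip the whole update's global (Frobenius) norm.
    total_norm = np.linalg.norm(clipped)
    if total_norm > global_norm_cap:
        clipped = clipped * (global_norm_cap / total_norm)
    return clipped

def sparsify(item_grads, keep_ratio=0.1):
    """Keep only the largest-magnitude entries of the update
    (top-k sparsification); the rest are zeroed before aggregation."""
    flat = np.abs(item_grads).ravel()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.partition(flat, -k)[-k]
    mask = np.abs(item_grads) >= threshold
    return item_grads * mask

# Example: process one client's item-embedding update (1000 items, 32-dim).
update = np.random.randn(1000, 32) * 0.05
safe_update = sparsify(hierarchical_clip(update))
```

As a general rationale (not a claim about the paper's exact design), clipping at both levels bounds how much any single client can move individual item embeddings and the model as a whole, while sparsification limits how many item embeddings one update can touch, which attenuates poisoned gradients that try to push a target item's exposure.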
Pages: 1690-1699
Number of pages: 10
Related papers
50 records in total
  • [1] Manipulating Visually Aware Federated Recommender Systems and Its Countermeasures
    Yuan, Wei
    Yuan, Shilong
    Yang, Chaoqun
    Hung, Nguyen Quoc Viet
    Yin, Hongzhi
    ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2024, 42 (03)
  • [2] Manipulating Recommender Systems: A Survey of Poisoning Attacks and Countermeasures
    Nguyen, Thanh Toan
    Hung, Nguyen Quoc Viet
    Nguyen, Thanh Tam
    Huynh, Thanh Trung
    Nguyen, Thanh Thi
    Weidlich, Matthias
    Yin, Hongzhi
    ACM COMPUTING SURVEYS, 2025, 57 (01)
  • [3] PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion
    Zhang, Shijie
    Yin, Hongzhi
    Chen, Tong
    Huang, Zi
    Quoc Viet Hung Nguyen
    Cui, Lizhen
    WSDM'22: PROCEEDINGS OF THE FIFTEENTH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING, 2022, : 1415 - 1423
  • [4] Poisoning Decentralized Collaborative Recommender System and Its Countermeasures
    Zheng, Ruiqi
    Qu, Liang
    Chen, Tong
    Zheng, Kai
    Shi, Yuhui
    Yin, Hongzhi
    PROCEEDINGS OF THE 47TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2024, 2024, : 1712 - 1721
  • [5] ClusterPoison: Poisoning Attacks on Recommender Systems with Limited Fake Users
    Wang, Yanling
    Liu, Yuchen
    Wang, Qian
    Wang, Cong
    IEEE COMMUNICATIONS MAGAZINE, 2024, 62 (11) : 136 - 142
  • [6] Manipulating vulnerability: Poisoning attacks and countermeasures in federated cloud-edge-client learning for image classification
    Zhao, Yaru
    Zhang, Jianbiao
    Cao, Yihao
    KNOWLEDGE-BASED SYSTEMS, 2023, 259
  • [7] Federated Conversational Recommender Systems
    Lin, Allen
    Wang, Jianling
    Zhu, Ziwei
    Caverlee, James
    ADVANCES IN INFORMATION RETRIEVAL, ECIR 2024, PT V, 2024, 14612 : 50 - 65
  • [8] Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
    Jagielski, Matthew
    Oprea, Alina
    Biggio, Battista
    Liu, Chang
    Nita-Rotaru, Cristina
    Li, Bo
    2018 IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP), 2018, : 19 - 35
  • [9] Recommender systems with selfish users
    Maria Halkidi
    Iordanis Koutsopoulos
    Knowledge and Information Systems, 2020, 62 : 3239 - 3262
  • [10] Recommender systems with selfish users
    Halkidi, Maria
    Koutsopoulos, Iordanis
    KNOWLEDGE AND INFORMATION SYSTEMS, 2020, 62 (08) : 3239 - 3262