Defending Against Membership Inference Attack for Counterfactual Federated Recommendation With Differentially Private Representation Learning

Cited by: 0
Authors
Liu, Xiuwen [1 ]
Chen, Yanjiao [2 ]
Pang, Shanchen [1 ]
Affiliations
[1] China Univ Petr East China, Qingdao Inst Software, Coll Comp Sci & Technol, Qingdao 266580, Shandong, Peoples R China
[2] Zhejiang Univ, Coll Elect Engn, Hangzhou 310027, Zhejiang, Peoples R China
Keywords
Privacy; Representation learning; Recommender systems; Differential privacy; Biological system modeling; Training; Planning; Membership inference attack and defense; differential privacy; causal inference; federated recommendation;
DOI
10.1109/TIFS.2024.3453031
CLC number
TP301 [Theory and Methods];
Discipline classification code
081202;
Abstract
In the marriage of federated learning and personalized recommendation services (FedRec), characterizing user-item interaction behaviors is a long-standing, unresolved issue, and the inherent openness of recommender systems raises growing data privacy concerns. As interaction-level membership inference attacks on FedRecs have recently surfaced, such adversarial attacks may act as hidden confounders behind interactive recommendation, obstructing the disentanglement of causal effects on long-term user satisfaction. Tailored to the specifics of private learning, we therefore propose a counterfactual interactive recommendation system built on a differentially private representation learning based defender (CIRDP) that captures and mitigates these adversarial threats, augmenting causal inference-based interactive recommendation in FedRecs. Characterizing the interaction-level membership inference attacks of a hidden eavesdropping adversary as the primary cause of the adversarial effect on user satisfaction, CIRDP incorporates causal inference-augmented offline reinforcement learning (offline RL) into FedRecs. CIRDP provides counterfactual satisfaction by optimizing a sensitivity-guided disentangled representation module with a novel two-fold mutual information objective. On this basis, CIRDP's differentially private representation learning based defender guarantees interaction behavior-level differential privacy (DP) at a significantly reduced privacy cost. Extensive comparisons demonstrate CIRDP's superiority over state-of-the-art baselines in reducing inference attack threats and improving long-term success in interactive recommendation.
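The abstract does not spell out how the defender enforces interaction behavior-level DP, so the following is only a minimal sketch under stated assumptions: a client-side Gaussian mechanism that clips a user's interaction representation and perturbs it before it is shared in federated training, so that a single released vector satisfies (epsilon, delta)-DP with respect to the assumed sensitivity bound. The function and parameter names (dp_perturb_representation, clip_norm, epsilon, delta) are hypothetical and not taken from the paper.

import numpy as np

def dp_perturb_representation(rep: np.ndarray,
                              clip_norm: float = 1.0,
                              epsilon: float = 1.0,
                              delta: float = 1e-5) -> np.ndarray:
    # Illustrative sketch, not the authors' implementation: clip the
    # representation to L2 norm clip_norm (taken here as the assumed L2
    # sensitivity of the release), then add Gaussian noise calibrated by the
    # classical Gaussian mechanism,
    # sigma = sqrt(2 ln(1.25/delta)) * sensitivity / epsilon.
    norm = np.linalg.norm(rep)
    clipped = rep * min(1.0, clip_norm / (norm + 1e-12))
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * clip_norm / epsilon
    return clipped + np.random.normal(0.0, sigma, size=rep.shape)

# Example: perturb a 64-dimensional user interaction embedding before upload.
user_rep = np.random.randn(64)
private_rep = dp_perturb_representation(user_rep, clip_norm=1.0, epsilon=2.0)

Note that repeated releases across federated training rounds compose and consume additional privacy budget, so a full defense would also need an accountant (for example, the moments accountant of Abadi et al.) to track the cumulative cost.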
Pages: 8037 - 8051
Number of pages: 15