Privacy-Preserving Recommendation with Debiased Obfuscation

Cited by: 2
Authors
Lin, Chennan [1 ]
Liu, Baisong [1 ]
Zhang, Xueyuan [1 ]
Wang, Zhiye [1 ]
Hu, Ce [1 ]
Luo, Linze [1 ]
Affiliations
[1] Ningbo Univ, Fac Elect Engn & Comp Sci, Ningbo, Peoples R China
Source
2022 IEEE INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS, TRUSTCOM | 2022
Keywords
Recommender systems; Privacy-preserving; Obfuscation; Attribute inference; Gender bias;
DOI
10.1109/TrustCom56396.2022.00086
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
As people enjoy personalized services from Recommender Systems (RSs), the risk of privacy disclosure increases with frequent interactions. Malicious adversaries often collect public information online to infer private information for illicit profit. As privacy concerns have grown, researchers have introduced data obfuscation into recommender systems. However, several limitations remain in current work. First, although existing methods effectively reduce the risk of privacy disclosure, they can degrade the quality of the recommendation service. Second, a range of practical issues in deployed recommender systems is not considered, e.g., the long-tail distribution and data density. To address these challenges, we propose a novel framework named Want User Defending Inference (WUDI), a high-performance privacy-preserving debiased framework based on data obfuscation. Unlike the original strategies, i.e., adding or removing user ratings, we introduce novel strategies to generate an obfuscated matrix. First, we define a new method called Cluster Recommend to alleviate the long-tail skewness and data sparsity in RSs. Then we investigate gender bias in obfuscation and apply a bias-mitigation strategy to RSs. Experiments on public datasets demonstrate that WUDI outperforms state-of-the-art baselines in obfuscation.
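The "original strategies" the abstract contrasts against — adding or removing user ratings to hide attribute signals — can be illustrated with a minimal sketch. This is a hypothetical baseline obfuscator, not the paper's WUDI algorithm; the function name and fraction parameters are assumptions for illustration only.

```python
import numpy as np

def obfuscate_ratings(R, add_frac=0.05, remove_frac=0.05, seed=0):
    """Illustrative rating-matrix obfuscation (NOT the WUDI method):
    randomly drop a fraction of observed ratings and inject a
    fraction of fake ones at unobserved positions."""
    rng = np.random.default_rng(seed)
    R = R.astype(float).copy()          # leave the caller's matrix intact
    observed = np.argwhere(R > 0)       # (user, item) pairs with a rating
    missing = np.argwhere(R == 0)       # candidate slots for fake ratings

    # Remove a random subset of the observed ratings.
    n_remove = int(len(observed) * remove_frac)
    for i, j in observed[rng.choice(len(observed), n_remove, replace=False)]:
        R[i, j] = 0

    # Inject fake ratings (1-5 scale assumed) at random empty slots.
    n_add = int(len(observed) * add_frac)
    for i, j in missing[rng.choice(len(missing), n_add, replace=False)]:
        R[i, j] = rng.integers(1, 6)

    return R
```

Such blind add/remove strategies are exactly what the abstract criticizes: they obscure the attacker's inference signal but also perturb the ratings the recommender learns from, which is the recommendation-quality trade-off WUDI is designed to mitigate.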
Pages: 590-597
Page count: 8