Federating recommendations using differentially private prototypes

Cited by: 18
Authors
Ribero, Monica [1]
Henderson, Jette [2]
Williamson, Sinead [2,3,4]
Vikalo, Haris [1]
Affiliations
[1] Univ Texas Austin, Dept Elect & Comp Engn, Austin, TX 78712 USA
[2] CognitiveScale, Austin, TX USA
[3] Univ Texas Austin, Dept Stat, Austin, TX USA
[4] Univ Texas Austin, Dept Informat Risk & Operat Management, Austin, TX USA
Keywords
Recommender systems; Differential privacy; Federated learning; Cross-silo federated learning; Matrix factorization
DOI
10.1016/j.patcog.2022.108746
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Machine learning methods exploit similarities in users' activity patterns to provide recommendations in applications across a wide range of fields including entertainment, dating, and commerce. However, in domains that demand protection of personally sensitive data, such as medicine or banking, how can we learn recommendation models without accessing the sensitive data and without inadvertently leaking private information? Many situations in the medical field prohibit centralizing the data from different hospitals and thus require learning from information kept in separate databases. We propose a new federated approach to learning global and local private models for recommendation without collecting raw data, user statistics, or information about personal preferences. Our method produces a set of locally learned prototypes that allow us to infer global behavioral patterns while providing differential privacy guarantees for users in any database of the system. By requiring only two rounds of communication, we both reduce communication costs and avoid the excessive privacy loss associated with typical iterative federated learning procedures. We test our framework on synthetic data, real federated medical data, and a federated version of MovieLens ratings. We show that local adaptation of the global model allows the proposed method to outperform centralized matrix-factorization-based recommender system models, both in terms of the accuracy of matrix reconstruction and in terms of the relevance of recommendations, while maintaining provable privacy guarantees. We also show that our method is more robust and has smaller variance than individual models learned by independent entities. (c) 2022 Elsevier Ltd. All rights reserved.
Pages: 14
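
The abstract describes the protocol only at a high level: each silo learns prototypes from its own users, releases them under differential privacy, and a server pools them in a single round trip, after which silos adapt the global model locally. The sketch below is an illustrative reading of that description, not the authors' implementation: it assumes prototypes are k-means centres of per-user vectors and privatizes them with the Gaussian mechanism; the function names (dp_prototypes, aggregate), the clipping step, and the sensitivity calibration are all assumptions introduced here for the example.

# Hypothetical sketch (not the authors' code) of a two-round, prototype-based
# federated recommender with per-silo differential privacy.
import numpy as np


def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's k-means; returns k cluster centres ("prototypes")."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign every row to its nearest centre, then recompute the centres.
        dists = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centres[j] = members.mean(axis=0)
    return centres


def dp_prototypes(X, k, epsilon, delta, clip_norm=1.0, seed=0):
    """Round 1 (silo side): compute prototypes locally, release noisy copies.

    Prototypes are clipped to L2 norm clip_norm and perturbed with the Gaussian
    mechanism. The sensitivity below (2 * clip_norm / n_users, a clipped-mean
    bound that ignores the data-dependence of the k-means assignments) is an
    assumed calibration for illustration only.
    """
    n_users = len(X)
    centres = kmeans(X, k, seed=seed)
    norms = np.linalg.norm(centres, axis=1, keepdims=True)
    centres = centres * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    sensitivity = 2.0 * clip_norm / n_users
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon
    rng = np.random.default_rng(seed)
    return centres + rng.normal(0.0, sigma, size=centres.shape)


def aggregate(prototype_sets):
    """Round 2 (server side): pool the noisy prototypes from every silo."""
    return np.vstack(prototype_sets)


if __name__ == "__main__":
    # Toy usage: three "hospitals", each holding private user vectors it never shares.
    rng = np.random.default_rng(42)
    silos = [rng.random((200, 16)) for _ in range(3)]
    noisy = [dp_prototypes(X, k=5, epsilon=1.0, delta=1e-5, seed=i)
             for i, X in enumerate(silos)]
    global_prototypes = aggregate(noisy)
    print(global_prototypes.shape)  # (15, 16): 5 prototypes from each of 3 silos

The local-adaptation step mentioned in the abstract, in which each silo combines the pooled prototypes with its own raw data (which never leaves the silo) to refine its recommender, is omitted from this sketch.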