Incentivizing Differentially Private Federated Learning: A Multidimensional Contract Approach

Cited by: 105
Authors
Wu, Maoqiang [1 ]
Ye, Dongdong [1 ]
Ding, Jiahao [2 ]
Guo, Yuanxiong [3 ]
Yu, Rong [1 ]
Pan, Miao [2 ]
Affiliations
[1] Guangdong Univ Technol, Sch Automat, Guangzhou 510006, Peoples R China
[2] Univ Houston, Elect & Comp Engn Dept, Houston, TX 77004 USA
[3] Univ Texas San Antonio, Dept Informat Syst & Cyber Secur, San Antonio, TX 78249 USA
Funding
U.S. National Science Foundation; National Natural Science Foundation of China;
Keywords
Data models; Computational modeling; Collaborative work; Data privacy; Internet of Things; Contracts; Task analysis; Differential privacy; federated learning; multidimensional contract; incentive mechanism;
DOI
10.1109/JIOT.2021.3050163
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812 ;
Abstract
Federated learning is a promising tool in the Internet of Things (IoT) domain for training a machine learning model in a decentralized manner. Specifically, the data owners (e.g., IoT device consumers) keep their raw data local and share only their local computation results to train the global model of the model owner (e.g., an IoT service provider). In executing a federated learning task, the data owners expend their own computation and communication resources. They also face privacy risks: attackers may infer data properties or even recover the raw data from the shared information. Given these costs and risks, data owners will be reluctant to contribute their data to federated learning without a well-designed incentive mechanism. In this article, we design an incentive mechanism that jointly considers the task expenditure and the privacy issue of federated learning. Building on a differentially private federated learning (DPFL) framework that prevents privacy leakage from the data owners, we model each data owner's contribution as well as its computation, communication, and privacy costs. These three types of costs are the data owners' private information, unknown to the model owner, which creates an information asymmetry. To maximize the utility of the model owner under this asymmetry, we design the incentive mechanism via a three-dimensional (3-D) contract approach. Simulation results validate the effectiveness of the proposed incentive mechanism with the DPFL framework against baseline mechanisms.
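The DPFL framework the abstract refers to lets each data owner perturb its shared update so that raw data cannot be recovered. The record does not specify the exact mechanism, so the sketch below illustrates the standard Gaussian-mechanism perturbation (clip the local update, then add calibrated noise) that DPFL schemes typically build on; the function name and parameters are illustrative, not the paper's API.

```python
import numpy as np

def dp_perturb_update(update, clip_norm, epsilon, delta, rng=None):
    """Clip a local model update and add Gaussian noise for (epsilon, delta)-DP.

    Illustrative sketch of the perturbation a data owner might apply before
    sharing its local computation result; not the paper's exact DPFL scheme.
    """
    rng = rng if rng is not None else np.random.default_rng()
    # Clip the update in L2 norm to bound its sensitivity by clip_norm.
    norm = np.linalg.norm(update)
    if norm > clip_norm:
        update = update * (clip_norm / norm)
    # Standard Gaussian-mechanism noise scale for sensitivity clip_norm.
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return update + rng.normal(0.0, sigma, size=update.shape)
```

A smaller epsilon means more noise, which is exactly the privacy cost the paper's contract design must compensate: stronger privacy protection degrades the owner's contribution to the global model.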
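Under the information asymmetry the abstract describes, a contract mechanism works by publishing a menu of items and letting each data owner self-select the item that maximizes its own utility, which reveals its private type. The sketch below is a hypothetical simplification of this self-selection step: the owner's private 3-D type is collapsed to a scalar unit cost, and each contract item is a (reward, required contribution level) pair. The names and the linear utility form are assumptions for illustration, not the paper's model.

```python
def best_contract(contracts, costs):
    """Pick the contract item maximizing a data owner's utility.

    contracts: list of (reward, required_contribution) pairs offered
               by the model owner.
    costs:     the owner's private (computation, communication, privacy)
               unit costs, unknown to the model owner.
    Hypothetical linear utility: reward - total_unit_cost * contribution.
    """
    comp, comm, priv = costs
    unit_cost = comp + comm + priv
    # Self-selection: the owner evaluates every item and takes the best one.
    return max(contracts, key=lambda item: item[0] - unit_cost * item[1])
```

An incentive-compatible menu is designed so that this self-selection is truthful: each owner type is best off taking the item intended for it, which is what lets the model owner maximize utility despite not observing the costs.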
Pages: 10639-10651
Page count: 13