Profit-Maximizing Model Marketplace with Differentially Private Federated Learning

Cited by: 23
Authors
Sun, Peng [1 ,2 ]
Chen, Xu [3 ]
Liao, Guocheng [4 ]
Huang, Jianwei [1 ,2 ]
Affiliations
[1] Chinese Univ Hong Kong, Sch Sci & Engn, Shenzhen, Peoples R China
[2] Shenzhen Inst Artificial Intelligence & Robot Soc, Shenzhen, Peoples R China
[3] Sun Yat Sen Univ, Sch Comp Sci & Engn, Guangzhou, Peoples R China
[4] Sun Yat Sen Univ, Sch Software Engn, Zhuhai, Peoples R China
Source
IEEE CONFERENCE ON COMPUTER COMMUNICATIONS (IEEE INFOCOM 2022) | 2022
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
ML model marketplace; federated learning; differential privacy; incentive mechanism;
DOI
10.1109/INFOCOM48880.2022.9796833
CLC Classification
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Existing machine learning (ML) model marketplaces generally require data owners to share their raw data, leading to serious privacy concerns. Federated learning (FL) can partially alleviate this issue by enabling model training without raw data exchange. However, data owners are still susceptible to privacy leakage from gradient exposure in FL, which discourages their participation. In this work, we advocate a novel differentially private FL (DPFL)-based ML model marketplace. We focus on a broker-centric design. Specifically, the broker first incentivizes data owners to participate in model training via DPFL by offering privacy protection according to their privacy budgets and explicitly accounting for their privacy costs. It then conducts optimal model versioning and pricing to sell the obtained model versions to model buyers. In particular, we focus on the broker's profit maximization, which is challenging due to the significant difficulties in characterizing the revenue of model trading and estimating the cost of DPFL model training. We propose a two-layer optimization framework to address it, i.e., revenue maximization and cost minimization under model quality constraints. The latter remains challenging due to its non-convexity and integer constraints. We hence propose efficient algorithms whose performance is both theoretically guaranteed and empirically validated.
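To make the DPFL setting described in the abstract concrete, the sketch below shows a generic differentially private federated averaging round in which each data owner perturbs its clipped update with Gaussian noise calibrated to its own privacy budget before the broker aggregates. This is a minimal illustration under standard Gaussian-mechanism assumptions, not the paper's algorithm; the function names (`gaussian_sigma`, `dp_federated_round`) and the parameters `clip_norm`, `delta`, and the example budgets are hypothetical.

```python
# Minimal sketch of one DPFL aggregation round with heterogeneous per-owner
# privacy budgets. Illustrative only; not the paper's mechanism.
import numpy as np

def gaussian_sigma(epsilon, delta, sensitivity):
    """Gaussian-mechanism noise scale (standard analytic bound for epsilon < 1)."""
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

def clip(update, clip_norm):
    """Clip a client update in L2 norm to bound its sensitivity."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / norm) if norm > 0 else update

def dp_federated_round(global_model, client_updates, budgets, delta=1e-5, clip_norm=1.0):
    """Each data owner adds Gaussian noise scaled to its own epsilon budget;
    the broker then averages the noisy updates into the global model."""
    noisy = []
    for update, eps in zip(client_updates, budgets):
        u = clip(update, clip_norm)
        sigma = gaussian_sigma(eps, delta, sensitivity=clip_norm)
        noisy.append(u + np.random.normal(0.0, sigma, size=u.shape))
    return global_model + np.mean(noisy, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = np.zeros(10)
    budgets = [0.5, 0.8, 0.3]              # hypothetical heterogeneous privacy budgets
    updates = [rng.normal(size=10) for _ in budgets]
    print(dp_federated_round(model, updates, budgets))
```

Owners with smaller budgets inject more noise, which is the model-quality/privacy-cost trade-off that the broker's two-layer optimization (revenue maximization over model versions, cost minimization under quality constraints) must account for.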
Pages: 1439 - 1448
Page count: 10