Privacy-Preserving Stochastic Gradual Learning

Cited: 4
Authors
Han, Bo [1 ]
Tsang, Ivor W. [2 ]
Xiao, Xiaokui [3 ]
Chen, Ling [2 ]
Fung, Sai-Fu [4 ]
Yu, Celina P. [5 ]
Affiliations
[1] Hong Kong Baptist Univ, Dept Comp Sci, Kowloon Tong, Hong Kong, Peoples R China
[2] Univ Technol Sydney, Ctr Artificial Intelligence, Ultimo, NSW 2007, Australia
[3] Natl Univ Singapore, Dept Comp Sci, Singapore 119077, Singapore
[4] City Univ Hong Kong, Dept Appl Social Sci, Kowloon Tong, Hong Kong, Peoples R China
[5] Global Business Coll Australia, Melbourne, Vic 3000, Australia
Keywords
Privacy; Optimization; Differential privacy; Robustness; Stochastic processes; Task analysis; Stochastic optimization; differential privacy; robustness; MACHINE;
DOI
10.1109/TKDE.2020.2963977
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
It is challenging for stochastic optimization to handle large-scale sensitive data safely. Duchi et al. recently proposed a private sampling strategy to address privacy leakage in stochastic optimization. However, this strategy degrades robustness, since it is equivalent to injecting noise into each gradient, which adversely affects updates of the primal variable. To address this challenge, we introduce a robust stochastic optimization method under the local privacy framework, called Privacy-pREserving StochasTIc Gradual lEarning (PRESTIGE). PRESTIGE bridges private updates of the primal variable (via private sampling) with gradual curriculum learning (CL). The noise injection causes an issue similar to label noise, which the robust learning process of CL can combat. Thus, PRESTIGE yields "private but robust" updates of the primal variable on the curriculum, that is, a reordered label sequence provided by CL. In theory, we derive the convergence rate and maximum complexity of PRESTIGE. Empirical results on six datasets show that PRESTIGE achieves a better tradeoff between privacy preservation and robustness than baselines.
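To make the idea in the abstract concrete, the minimal Python sketch below combines noise-injected gradient updates with a curriculum that processes low-loss ("easy") examples first. It is only an illustration of the general technique, not the authors' PRESTIGE algorithm: the function names (noisy_curriculum_sgd, logistic_loss_grad), the Laplace noise mechanism and its scale, and the loss-based curriculum criterion are all assumptions made for this example.

```python
# Hedged sketch: noisy SGD on a loss-ordered curriculum (illustrative only).
import numpy as np

def logistic_loss_grad(w, x, y):
    """Per-example logistic loss and gradient, for labels y in {-1, +1}."""
    margin = y * np.dot(w, x)
    loss = np.logaddexp(0.0, -margin)                     # log(1 + exp(-margin)), numerically stable
    grad = -y * x * np.exp(-np.logaddexp(0.0, margin))    # -y * x * sigmoid(-margin)
    return loss, grad

def noisy_curriculum_sgd(X, y, epsilon=1.0, clip=1.0, lr=0.1, epochs=5, rng=None):
    """Noisy SGD over a loss-ordered curriculum (sketch, not a calibrated DP method).

    epsilon : nominal per-gradient privacy budget; the Laplace scale below is
              a placeholder, not a formal differential-privacy guarantee
    clip    : L2 clipping bound so the noise scale is well defined
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        # Curriculum step: rank examples from "easy" (low loss) to "hard".
        losses = np.array([logistic_loss_grad(w, X[i], y[i])[0] for i in range(n)])
        for i in np.argsort(losses):
            _, g = logistic_loss_grad(w, X[i], y[i])
            # Clip the per-example gradient, then inject noise -- a stand-in
            # for the private-sampling / noise-injection step in the abstract.
            g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))
            g += rng.laplace(scale=2.0 * clip / epsilon, size=d)
            w -= lr * g
    return w

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X @ rng.normal(size=5))
w_hat = noisy_curriculum_sgd(X, y, epsilon=2.0, rng=rng)
```

The ordering step mirrors the "reordered label sequence provided by CL" mentioned in the abstract, while the clipped, noise-injected update stands in for the private-sampling step; how PRESTIGE actually couples the two is specified in the paper itself.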
Pages: 3129 - 3140
Page count: 12
Related Papers
50 results total
  • [1] Stochastic privacy-preserving methods for nonconvex sparse learning
    Liang, Guannan
    Tong, Qianqian
    Ding, Jiahao
    Pan, Miao
    Bi, Jinbo
    INFORMATION SCIENCES, 2023, 630 : 567 - 585
  • [2] Privacy-Preserving Machine Learning [Cryptography]
    Kerschbaum, Florian
    Lukas, Nils
    IEEE SECURITY & PRIVACY, 2023, 21 (06) : 90 - 94
  • [3] Privacy-Preserving Federated Edge Learning: Modeling and Optimization
    Liu, Tianyu
    Di, Boya
    Song, Lingyang
    IEEE COMMUNICATIONS LETTERS, 2022, 26 (07) : 1489 - 1493
  • [4] A Novel Approach for Differential Privacy-Preserving Federated Learning
    Elgabli, Anis
    Mesbah, Wessam
    IEEE OPEN JOURNAL OF THE COMMUNICATIONS SOCIETY, 2025, 6 : 466 - 476
  • [5] Privacy-Preserving Incentive Mechanism Design for Federated Cloud-Edge Learning
    Liu, Tianyu
    Di, Boya
    An, Peng
    Song, Lingyang
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2021, 8 (03): : 2588 - 2600
  • [6] Towards robust and privacy-preserving federated learning in edge computing
    Zhou, Hongliang
    Zheng, Yifeng
    Jia, Xiaohua
    COMPUTER NETWORKS, 2024, 243
  • [7] Privacy-Preserving Cost-Sensitive Learning
    Yang, Yi
    Huang, Shuai
    Huang, Wei
    Chang, Xiangyu
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (05) : 2105 - 2116
  • [8] A Framework for Privacy-Preserving in IoV Using Federated Learning With Differential Privacy
    Adnan, Muhammad
    Syed, Madiha Haider
    Anjum, Adeel
    Rehman, Semeen
    IEEE ACCESS, 2025, 13 : 13507 - 13521
  • [9] Survey on Privacy-Preserving Machine Learning
    Liu J.
    Meng X.
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2020, 57 (02): : 346 - 362
  • [10] Adaptive privacy-preserving federated learning
    Liu, Xiaoyuan
    Li, Hongwei
    Xu, Guowen
    Lu, Rongxing
    He, Miao
    PEER-TO-PEER NETWORKING AND APPLICATIONS, 2020, 13 (06) : 2356 - 2366