Distributed Online Convex Optimization With Statistical Privacy

Times Cited: 0
Authors
Dai, Mingcheng [1 ]
Ho, Daniel W. C. [2 ]
Zhang, Baoyong [1 ]
Yuan, Deming [1 ]
Xu, Shengyuan [1 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Automat, Nanjing 210094, Jiangsu, Peoples R China
[2] City Univ Hong Kong, Dept Math, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Privacy; Cost function; Perturbation methods; Convex functions; Heuristic algorithms; Costs; Vectors; Upper bound; Protocols; Learning systems; Distributed (sub)gradient descent algorithm; online convex optimization (OCO); regret; statistical privacy; REGRET;
DOI
10.1109/TNNLS.2024.3492144
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We focus on the problem of distributed online constrained convex optimization with statistical privacy in multiagent systems. The participating agents aim to collaboratively minimize the cumulative system-wide cost while a passive adversary corrupts some of them. The passive adversary collects information from the corrupted agents and attempts to estimate the private information of the uncorrupted ones. In this scenario, we adopt a correlated perturbation mechanism with a globally balanced property to mask the local information of agents and thereby preserve privacy. This work is the first to integrate such a mechanism into the distributed online (sub)gradient descent algorithm, yielding a new algorithm called privacy-preserving distributed online convex optimization (PP-DOCO). It is proved that the designed algorithm provides a statistical privacy guarantee for the uncorrupted agents and achieves an expected regret of O(√K) for convex cost functions, where K denotes the time horizon. Furthermore, an improved expected regret of O(log K) is derived for strongly convex cost functions. These results match the best regret scalings achieved by state-of-the-art algorithms. A privacy bound is established to quantify the level of statistical privacy using the notion of Kullback-Leibler divergence (KLD). In addition, we observe that a tradeoff exists between the algorithm's expected regret and its statistical privacy. Finally, the effectiveness of the algorithm is validated by simulation results.
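As we read the abstract, the core mechanism is: each agent masks its local state with a correlated perturbation that cancels across the network (the globally balanced property), mixes the masked states with its neighbors, and then takes a projected (sub)gradient step on the cost just revealed to it. Below is a minimal Python sketch of that general scheme under our own assumptions; the function names, the doubly stochastic ring mixing matrix, the O(1/√k) step size, and the ball projection are illustrative choices, not the paper's exact PP-DOCO protocol.

import numpy as np

def balanced_noise(n_agents, dim, scale, rng):
    # Correlated perturbations e_1..e_n with sum_i e_i = 0 (an assumed form
    # of the "globally balanced" property): network averages stay undistorted.
    e = rng.normal(0.0, scale, size=(n_agents, dim))
    return e - e.mean(axis=0)

def project_ball(x, radius):
    # Euclidean projection of each agent's row onto a ball (the constraint set
    # here is assumed to be a ball for simplicity).
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    return x * np.minimum(1.0, radius / np.maximum(norms, 1e-12))

def pp_doco_step(x, W, subgrads, step, scale, rng, radius=1.0):
    # One round: mask local states, mix with neighbors via the doubly
    # stochastic matrix W, take a subgradient step, project back.
    y = x + balanced_noise(*x.shape, scale, rng)
    return project_ball(W @ y - step * subgrads, radius)

# Toy usage: 4 agents on a ring, time-varying costs f_{i,k}(x) = 0.5*||x - a_{i,k}||^2.
rng = np.random.default_rng(0)
n, d, K = 4, 2, 200
W = 0.25 * (2 * np.eye(n) + np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1))
x = np.zeros((n, d))
for k in range(1, K + 1):
    targets = rng.normal(0.0, 0.5, size=(n, d))  # a_{i,k}, revealed at round k
    subgrads = x - targets                       # gradient of f_{i,k} at x_i
    x = pp_doco_step(x, W, subgrads, step=1.0 / np.sqrt(k), scale=0.3, rng=rng)

Under these assumptions the injected noise cancels in the network average, so the consensus dynamics match plain distributed online subgradient descent while each transmitted state y_i is individually masked; raising the noise scale strengthens masking at the cost of slower per-agent convergence, mirroring the regret-privacy tradeoff noted in the abstract.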
Pages: 14