Regret and Cumulative Constraint Violation Analysis for Distributed Online Constrained Convex Optimization

Cited by: 13
Authors
Yi, Xinlei [1 ,2 ]
Li, Xiuxian [3 ,4 ]
Yang, Tao [5 ]
Xie, Lihua [6 ]
Chai, Tianyou [5 ]
Johansson, Karl Henrik [1 ,2 ]
Affiliations
[1] KTH Royal Inst Technol, Sch Elect Engn & Comp Sci, Div Decis & Control Syst, S-10044 Stockholm, Sweden
[2] Digital Futures, S-10044 Stockholm, Sweden
[3] Tongji Univ, Coll Elect & Informat Engn, Dept Control Sci & Engn, Shanghai 200070, Peoples R China
[4] Tongji Univ, Shanghai Res Inst Intelligent Autonomous Syst, Shanghai 200070, Peoples R China
[5] Northeastern Univ, State Key Lab Synthet Automation Proc Ind, Shenyang 110819, Peoples R China
[6] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore
Funding
Swedish Research Council; National Natural Science Foundation of China;
Keywords
Convex functions; Measurement; Heuristic algorithms; Benchmark testing; Time measurement; Standards; Machine learning; Cumulative constraint violation; distributed optimization; online optimization; regret; time-varying constraints; ALGORITHM;
DOI
10.1109/TAC.2022.3230766
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
This article considers the distributed online convex optimization problem with time-varying constraints over a network of agents. This is a sequential decision-making problem with two sequences of arbitrarily varying convex loss and constraint functions. At each round, each agent selects a decision from the decision set, and then only a portion of the loss function and a coordinate block of the constraint function at this round are privately revealed to this agent. The goal of the network is to minimize the network-wide loss accumulated over time. Two distributed online algorithms with full-information and bandit feedback are proposed. Both dynamic and static network regret bounds are analyzed for the proposed algorithms, and network cumulative constraint violation is used to measure constraint violation, which rules out the situation where strictly feasible constraints compensate for the effects of violated constraints. In particular, we show that the proposed algorithms achieve O(T^{max{κ, 1−κ}}) static network regret and O(T^{1−κ/2}) network cumulative constraint violation, where T is the time horizon and κ ∈ (0, 1) is a user-defined tradeoff parameter. Moreover, if the loss functions are strongly convex, then the static network regret bound can be reduced to O(T^{κ}). Finally, numerical simulations are provided to illustrate the effectiveness of the theoretical results.
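The abstract describes a distributed online scheme in which each agent mixes with its neighbors, takes a local gradient-type step, and tracks a time-varying constraint, with a tradeoff parameter κ governing the step sizes. The sketch below is a minimal, illustrative rendering of that idea, not the authors' exact algorithm: the complete-graph mixing matrix W, the ball decision set, the quadratic losses and affine constraints, and the step-size choices α_t = t^(−κ) and γ_t = t^(−(1−κ)) are all assumptions made for this example.

# Illustrative sketch (NOT the paper's exact algorithm) of a distributed
# online primal-dual subgradient update with time-varying constraints.
import numpy as np

rng = np.random.default_rng(0)
n, d, T, kappa = 4, 3, 2000, 0.5      # agents, dimension, horizon, tradeoff parameter
W = np.full((n, n), 1.0 / n)          # doubly stochastic mixing matrix (complete graph)
R = 5.0                               # radius of the ball decision set

x = np.zeros((n, d))                  # local decisions
q = np.zeros(n)                       # local dual variables (one scalar constraint each)

def project_ball(v, radius=R):
    """Euclidean projection onto the ball of the given radius."""
    nrm = np.linalg.norm(v)
    return v if nrm <= radius else v * (radius / nrm)

for t in range(1, T + 1):
    alpha = t ** (-kappa)             # primal step size, decays like t^{-kappa}
    gamma = t ** (-(1.0 - kappa))     # dual regularization, decays like t^{-(1-kappa)}

    # The adversary reveals the local loss f_{i,t}(x) = ||x - a_{i,t}||^2 and
    # the local constraint g_{i,t}(x) = <b_{i,t}, x> - 1 only after decisions are made.
    a = rng.normal(size=(n, d))
    b = rng.normal(size=(n, d))

    x_mix = W @ x                     # consensus (mixing) step with neighbors
    for i in range(n):
        grad_f = 2.0 * (x_mix[i] - a[i])   # subgradient of the local loss
        g_val = b[i] @ x_mix[i] - 1.0      # local constraint value
        grad_g = b[i]                      # subgradient of the local constraint
        # Primal step: loss direction plus dual-weighted constraint direction, then project.
        x[i] = project_ball(x_mix[i] - alpha * (grad_f + q[i] * grad_g))
        # Dual step on the clipped violation, with shrinking regularization.
        q[i] = max(0.0, (1.0 - gamma) * q[i] + alpha * max(g_val, 0.0))

In this sketch, larger κ makes the primal step decay faster while the dual regularization decays more slowly, which is one way to picture the regret versus cumulative-constraint-violation tradeoff, O(T^{max{κ, 1−κ}}) against O(T^{1−κ/2}), stated in the abstract.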
Pages: 2875-2890
Number of pages: 16