On the Time-Varying Distributions of Online Stochastic Optimization

Cited by: 0
Authors
Cao, Xuanyu [1]
Zhang, Junshan [2]
Poor, H. Vincent [1]
Affiliations
[1] Princeton Univ, Dept Elect Engn, Princeton, NJ 08544 USA
[2] Arizona State Univ, Sch Elect Comp & Energy Engn, Tempe, AZ USA
Keywords
Stochastic optimization; online optimization; online learning; time-varying distributions; dynamic benchmark; SAMPLE AVERAGE APPROXIMATION; CONVEX-OPTIMIZATION; ALGORITHMS
DOI
Not available
CLC Classification Number
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
This paper studies online stochastic optimization where the random parameters follow time-varying distributions. In each time slot, after a control variable is determined, a sample drawn from the current distribution is revealed as feedback information. This form of stochastic optimization has broad applications in online learning and signal processing, where the underlying ground truth is inherently time-varying, e.g., tracking a moving target. Dynamic optimal points are adopted as the performance benchmark to define the regret, as opposed to the static optimal point used in stochastic optimization with fixed distributions. Stochastic optimization with time-varying distributions is examined and a projected stochastic gradient descent algorithm is presented. An upper bound on its regret is established with respect to the drift of the dynamic optima, which measures the variation of the optimal solutions caused by the varying distributions. In particular, the algorithm achieves sublinear regret as long as the drift of the optima is sublinear, i.e., the distributions do not vary too drastically. Finally, numerical results are presented to corroborate the efficacy of the proposed algorithm and the derived analytical results.
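The setup described in the abstract can be sketched as follows. This is a minimal illustrative toy, not the paper's experiments: the per-slot loss is assumed to be f_t(x) = 0.5·E‖x − s‖², with samples s drawn from a Gaussian whose mean θ_t drifts slowly (so the drift of the dynamic optima is small), and the feasible set is assumed to be a Euclidean ball. The drift rate, step size, and dimensions are all hypothetical choices for the sketch.

```python
import numpy as np

def project(x, radius=10.0):
    # Euclidean projection onto the ball {x : ||x|| <= radius}.
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def projected_sgd_tracking(T=2000, eta=0.05, sigma=0.1, seed=0):
    """Projected SGD against a slowly drifting distribution (toy sketch).

    In slot t, the decision x_t is committed first; then a sample
    s_t ~ N(theta_t, sigma^2 I) is revealed as feedback. The loss
    f_t(x) = 0.5 * E||x - s_t||^2 is minimized at theta_t, so theta_t
    is the dynamic optimum being tracked.
    """
    rng = np.random.default_rng(seed)
    d = 2
    x = np.zeros(d)                   # initial decision
    theta = np.array([1.0, -1.0])     # initial (unknown) distribution mean
    errors = []
    for t in range(T):
        theta = theta + 0.001 * rng.standard_normal(d)  # slow drift of the optimum
        s = theta + sigma * rng.standard_normal(d)      # sample from current distribution
        grad = x - s                                    # unbiased stochastic gradient of f_t
        x = project(x - eta * grad)                     # projected SGD step
        errors.append(np.linalg.norm(x - theta))        # tracking error vs. dynamic optimum
    return x, theta, errors

x, theta, errors = projected_sgd_tracking()
```

Because the drift per slot (0.001) is small relative to the step size, the iterate settles into a neighborhood of the moving optimum, which mirrors the paper's message that sublinear drift of the optima yields sublinear regret.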
Pages: 1494-1500 (7 pages)