A primal-dual perspective of online learning algorithms

Cited by: 82
Authors
Shalev-Shwartz, Shai [1 ]
Singer, Yoram [1 ]
Affiliation
[1] Hebrew Univ Jerusalem, Sch Comp Sci & Engn, IL-91904 Jerusalem, Israel
Keywords
online learning; mistake bounds; duality; regret bounds
DOI
10.1007/s10994-007-5014-x
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
We describe a novel framework for the design and analysis of online learning algorithms based on the notion of duality in constrained optimization. We cast a sub-family of universal online bounds as an optimization problem. Using the weak duality theorem we reduce the process of online learning to the task of incrementally increasing the dual objective function. The amount by which the dual increases serves as a new and natural notion of progress for analyzing online learning algorithms. We are thus able to tie the primal objective value and the number of prediction mistakes using the increase in the dual.
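To make the argument in the abstract concrete, the weak-duality chain it describes can be written schematically as follows; the notation (regularizer f, per-round losses \ell_t, trade-off constant C, dual objective D, per-round dual increase \Delta_t, minimal increase \mu) is assumed here for illustration and is not taken from this record:

P^* = \min_{\omega} \Big( f(\omega) + C \sum_{t=1}^{T} \ell_t(\omega) \Big)   (primal objective over the full sequence)

\sum_{t=1}^{T} \Delta_t \;\le\; D(\lambda_1, \ldots, \lambda_T) \;\le\; P^*   (weak duality)

If the dual objective is never decreased and every prediction mistake forces a dual increase \Delta_t \ge \mu > 0, then the number of mistakes M satisfies M \le P^* / \mu, tying the primal objective value to the mistake count through the accumulated increase in the dual.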
Pages: 115-142
Number of pages: 28