Sharper Generalization Bounds for Pairwise Learning

Times Cited: 0
Authors
Lei, Yunwen [1 ,2 ]
Ledent, Antoine [2 ]
Kloft, Marius [2 ]
Affiliations
[1] Univ Birmingham, Sch Comp Sci, Birmingham B15 2TT, W Midlands, England
[2] TU Kaiserslautern, Dept Comp Sci, D-67653 Kaiserslautern, Germany
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33 (NEURIPS 2020), 2020, Vol. 33
Funding
National Natural Science Foundation of China;
Keywords
STABILITY; RANKING;
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Pairwise learning refers to learning tasks with loss functions depending on a pair of training examples, which includes ranking and metric learning as specific examples. Recently, there has been increasing attention to the generalization analysis of pairwise learning to understand its practical behavior. However, the existing stability analysis provides suboptimal high-probability generalization bounds. In this paper, we provide a refined stability analysis by developing generalization bounds which can be √n-times faster than the existing results, where n is the sample size. This implies excess risk bounds of the order O(1/n) (up to a logarithmic factor) for both regularized risk minimization and stochastic gradient descent. We also introduce a new on-average stability measure to develop optimistic bounds in a low noise setting. We apply our results to ranking and metric learning, and clearly show the advantage of our generalization bounds over the existing analysis.
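To make the abstract's setup concrete, the following is a minimal sketch (not taken from the paper) of pairwise learning trained with stochastic gradient descent: a linear scorer fitted with a pairwise hinge loss for bipartite ranking on synthetic data. The data generation, step-size schedule, and pair-sampling scheme are illustrative assumptions.

# Minimal sketch of pairwise learning with SGD (illustrative, not the authors' code):
# a linear scorer trained with a pairwise hinge loss, as in bipartite ranking.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n examples with binary labels. The pairwise loss below penalizes
# pairs (i, j) in which a positive example is not scored above a negative one.
n, d = 200, 5
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) + 0.1 * rng.normal(size=n) > 0).astype(float)

w = np.zeros(d)
eta = 0.1  # base step size; the 1/sqrt(t) decay below is an arbitrary choice

for t in range(2000):
    i, j = rng.integers(n, size=2)
    if y[i] == y[j]:
        continue  # the pairwise hinge loss is only defined on mixed-label pairs
    if y[i] < y[j]:
        i, j = j, i  # orient the pair so that example i is the positive one
    margin = w @ (X[i] - X[j])
    if margin < 1.0:  # subgradient step on max(0, 1 - margin)
        w += eta / np.sqrt(t + 1) * (X[i] - X[j])

# Empirical pairwise error: fraction of mis-ordered positive/negative pairs.
pos, neg = X[y == 1], X[y == 0]
error = np.mean((pos @ w)[:, None] <= (neg @ w)[None, :])
print(f"pairwise ranking error: {error:.3f}")

The excess risk rates discussed in the abstract concern the gap between the expected pairwise risk of such an SGD (or regularized risk minimization) output and the best achievable risk, as a function of the sample size n.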
Pages: 11
Related Articles
50 records in total
  • [11] Learning Rates for Nonconvex Pairwise Learning
    Li, Shaojie
    Liu, Yong
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (08) : 9996 - 10011
  • [12] Algorithm-Dependent Generalization Bounds for Multi-Task Learning
    Liu, Tongliang
    Tao, Dacheng
    Song, Mingli
    Maybank, Stephen J.
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2017, 39 (02) : 227 - 241
  • [13] Refined Generalization Bounds of Gradient Learning over Reproducing Kernel Hilbert Spaces
    Lv, Shao-Gao
    NEURAL COMPUTATION, 2015, 27 (06) : 1294 - 1320
  • [14] Information-theoretic generalization bounds for black-box learning algorithms
    Harutyunyan, Hrayr
    Raginsky, Maxim
    Ver Steeg, Greg
    Galstyan, Aram
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [15] Information Complexity and Generalization Bounds
    Banerjee, Pradeep Kr
    Montufar, Guido
    2021 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY (ISIT), 2021, : 676 - 681
  • [16] Generalization Bounds for Uniformly Stable Algorithms
    Feldman, Vitaly
    Vondrak, Jan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [17] Regret Bounds for Online Pairwise Learning with Non-Convex Loss Functions Using Stability Analysis
    Lang X.
    Li C.
    Liu Y.
    Wang M.
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2023, 60 (12) : 2806 - 2813
  • [18] Generalization error bounds using Wasserstein distances
    Lopez, Adrian Tovar
    Jog, Varun
    2018 IEEE INFORMATION THEORY WORKSHOP (ITW), 2018, : 205 - 209
  • [19] Generalization Bounds of Regularization Algorithm with Gaussian Kernels
    Cao, Feilong
    Liu, Yufang
    Zhang, Weiguo
    NEURAL PROCESSING LETTERS, 2014, 39 (02) : 179 - 194
  • [20] Generalization bounds for the area under the ROC curve
    Agarwal, S
    Graepel, T
    Herbrich, R
    Har-Peled, S
    Roth, D
    JOURNAL OF MACHINE LEARNING RESEARCH, 2005, 6 : 393 - 425