Learning Rates for Nonconvex Pairwise Learning

Cited by: 2
Authors
Li, Shaojie [1 ]
Liu, Yong [1 ]
Affiliations
[1] Renmin Univ China, Gaoling Sch Artificial Intelligence, Beijing 100872, Peoples R China
Funding
National Natural Science Foundation of China; Beijing Natural Science Foundation
Keywords
Convergence; Stability analysis; Measurement; Training; Statistics; Sociology; Optimization; Generalization performance; learning rates; nonconvex optimization; pairwise learning; EMPIRICAL RISK; ALGORITHMS; STABILITY; RANKING; MINIMIZATION;
DOI
10.1109/TPAMI.2023.3259324
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Pairwise learning is receiving increasing attention since it covers many important machine learning tasks, e.g., metric learning, AUC maximization, and ranking. Investigating the generalization behavior of pairwise learning is thus of great significance. However, existing generalization analysis mainly focuses on convex objective functions, leaving nonconvex pairwise learning far less explored. Moreover, the current learning rates for pairwise learning are mostly of a slower order. Motivated by these problems, we study the generalization performance of nonconvex pairwise learning and provide improved learning rates. Specifically, we develop uniform convergence of gradients for pairwise learning under different assumptions, based on which we characterize the empirical risk minimizer, gradient descent, and stochastic gradient descent. We first establish learning rates for these algorithms in a general nonconvex setting, where the analysis sheds light on the trade-off between optimization and generalization and the role of early stopping. We then derive faster learning rates of order O(1/n) for nonconvex pairwise learning under a gradient dominance curvature condition, where n is the sample size. Provided that the optimal population risk is small, we further improve the learning rates to O(1/n^2), which, to the best of our knowledge, are the first O(1/n^2) rates for pairwise learning.
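To make the pairwise setting concrete, below is a minimal sketch of stochastic gradient descent on a pairwise surrogate loss for AUC maximization, one of the tasks the abstract names. This is an illustration only, not the paper's algorithm: the least-squares pairwise loss, the linear model, and all hyperparameters (`lr`, `epochs`) are assumptions chosen for simplicity.

```python
import numpy as np

def pairwise_sgd(X, y, lr=0.01, epochs=50, seed=0):
    """SGD on a pairwise least-squares surrogate for AUC maximization.

    For a positive example x_i and a negative example x_j, the loss of a
    linear scorer w is (1 - w^T (x_i - x_j))^2, i.e., it rewards ranking
    positives above negatives by a margin of 1. Each SGD step samples one
    positive-negative pair, matching the pairwise structure of the objective.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    pos = np.flatnonzero(y == 1)
    neg = np.flatnonzero(y == 0)
    for _ in range(epochs):
        for _ in range(n):
            i = rng.choice(pos)
            j = rng.choice(neg)
            diff = X[i] - X[j]
            margin = w @ diff
            # Gradient of (1 - margin)^2 with respect to w.
            grad = -2.0 * (1.0 - margin) * diff
            w -= lr * grad
    return w

def empirical_auc(w, X, y):
    """Fraction of positive-negative pairs ranked correctly by w."""
    scores = X @ w
    pos, neg = scores[y == 1], scores[y == 0]
    return np.mean(pos[:, None] > neg[None, :])
```

On linearly separable two-class data, the learned `w` drives the empirical AUC (itself a pairwise statistic) toward 1, which is the sense in which AUC maximization is a pairwise learning problem.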
Pages: 9996-10011
Page count: 16