Revisiting Discriminative vs. Generative Classifiers: Theory and Implications

Cited: 0
Authors
Zheng, Chenyu [1 ,2 ]
Wu, Guoqiang [3 ]
Bao, Fan [4 ]
Cao, Yue [5 ]
Li, Chongxuan [1 ,2 ]
Zhu, Jun [4 ]
Affiliations
[1] Renmin Univ China, Gaoling Sch AI, Beijing, Peoples R China
[2] Beijing Key Lab Big Data Management & Anal Method, Beijing, Peoples R China
[3] Shandong Univ, Sch Software, Jinan, Peoples R China
[4] Tsinghua Univ, BNRist Ctr, Tsinghua Huawei Joint Ctr AI, THBI Lab,Dept Comp Sci & Tech,Inst AI, Beijing, Peoples R China
[5] Beijing Acad Artificial Intelligence, Beijing, Peoples R China
Source
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, 2023, Vol. 202
Keywords
LOGISTIC-REGRESSION; CLASSIFICATION; CONSISTENCY
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
A large-scale deep model pre-trained on massive labeled or unlabeled data transfers well to downstream tasks. Linear evaluation freezes the parameters of the pre-trained model and trains a linear classifier separately, which is efficient and attractive for transfer. However, little work has investigated the choice of classifier in linear evaluation beyond the default logistic regression. Inspired by the statistical efficiency of naive Bayes, this paper revisits the classical topic of discriminative vs. generative classifiers (Ng & Jordan, 2001). Theoretically, we consider the surrogate loss instead of the zero-one loss in our analyses and generalize the classical results from the binary case to the multiclass one. We show that, under mild assumptions, multiclass naive Bayes requires O(log n) samples to approach its asymptotic error, while the corresponding multiclass logistic regression requires O(n) samples, where n is the feature dimension. To establish this result, we present a multiclass H-consistency bound framework and an explicit bound for the logistic loss, which are of independent interest. Simulation results on a mixture of Gaussians validate our theoretical findings. Experiments on various pre-trained deep vision models show that naive Bayes consistently converges faster as the amount of data increases. Moreover, naive Bayes shows promise in few-shot cases, and we observe the "two regimes" phenomenon in pre-trained supervised models. Our code is available at https://github.com/ML-GSAI/Revisiting-Dis-vs-Gen-Classifiers.
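As a rough, self-contained illustration of the Gaussian-mixture simulation described in the abstract (not the authors' implementation, which is available at the GitHub link above), the sketch below compares a generative classifier (scikit-learn's GaussianNB) against a discriminative one (LogisticRegression) as the training set grows. The data-generating parameters, training-set sizes, and variable names here are assumptions chosen purely for illustration.

    # Minimal sketch: sample efficiency of a generative classifier (Gaussian
    # naive Bayes) vs. a discriminative one (logistic regression) on
    # synthetic Gaussian-mixture data. Not the paper's code.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_features, n_classes, n_test = 100, 3, 5000

    # Class-conditional Gaussians with identity covariance, so the naive
    # Bayes conditional-independence assumption holds by construction.
    means = rng.normal(scale=1.0, size=(n_classes, n_features))

    def sample(n):
        y = rng.integers(n_classes, size=n)
        x = means[y] + rng.normal(size=(n, n_features))
        return x, y

    X_test, y_test = sample(n_test)

    for n_train in [50, 100, 500, 2000, 10000]:
        X_train, y_train = sample(n_train)
        nb = GaussianNB().fit(X_train, y_train)
        lr = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        print(f"n_train={n_train:6d}  "
              f"naive Bayes err={1 - nb.score(X_test, y_test):.3f}  "
              f"logistic err={1 - lr.score(X_test, y_test):.3f}")

In such a setup one would expect the naive Bayes test error to flatten near its asymptotic value at much smaller training sizes than logistic regression, mirroring the O(log n) vs. O(n) comparison stated in the abstract.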
Pages: 58