Optimizing Partial Area Under the Top-k Curve: Theory and Practice

Cited by: 3
Authors
Wang, Zitai [1 ,2 ]
Xu, Qianqian [3 ]
Yang, Zhiyong [4 ]
He, Yuan [5 ]
Cao, Xiaochun [6 ]
Huang, Qingming [3 ,7 ,8 ,9 ]
Affiliations
[1] Chinese Acad Sci, Inst Informat Engn, State Key Lab Informat Secur, Beijing 100093, Peoples R China
[2] Univ Chinese Acad Sci, Sch Cyber Secur, Beijing 100049, Peoples R China
[3] Chinese Acad Sci, Inst Comp Technol, Key Lab Intelligent Informat Proc, Beijing 100190, Peoples R China
[4] Univ Chinese Acad Sci, Sch Comp Sci & Technol, Beijing 100049, Peoples R China
[5] Alibaba Grp, Secur Dept, Hangzhou, Peoples R China
[6] Sun Yat Sen Univ, Sch Cyber Sci & Technol, Shenzhen Campus, Shenzhen 311121, Peoples R China
[7] Univ Chinese Acad Sci, Sch Comp Sci & Technol, Beijing 101408, Peoples R China
[8] Univ Chinese Acad Sci, Key Lab Big Data Min & Knowledge Management, Beijing 101408, Peoples R China
[9] Peng Cheng Lab, Shenzhen 518055, Peoples R China
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
Measurement; Semantics; Benchmark testing; Training; Loss measurement; Fasteners; Upper bound; Machine learning; label ambiguity; Top-k error; AUTKC optimization; LABEL RANKING; AUC OPTIMIZATION; BOUNDS; CLASSIFICATION; ASSOCIATION; MULTICLASS; ALGORITHM; MODELS; RULES;
DOI
10.1109/TPAMI.2022.3199970
CLC number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Top-$k$ error has become a popular metric for large-scale classification benchmarks due to the inevitable semantic ambiguity among classes. Existing literature on top-$k$ optimization generally focuses on how to optimize the top-$k$ objective, while ignoring the limitations of the metric itself. In this paper, we point out that the top-$k$ objective lacks sufficient discrimination: the induced predictions may give a totally irrelevant label a top rank. To fix this issue, we develop a novel metric named partial Area Under the Top-$k$ Curve (AUTKC). Theoretical analysis shows that AUTKC has better discrimination ability, and its Bayes optimal score function gives a correct top-$K$ ranking with respect to the conditional probability. In other words, AUTKC does not allow irrelevant labels to appear in the top list. Furthermore, we present an empirical surrogate risk minimization framework to optimize the proposed metric. Theoretically, we establish (1) a sufficient condition for Fisher consistency of the Bayes optimal score function and (2) a generalization upper bound that is insensitive to the number of classes under a simple hyperparameter setting. Finally, experimental results on four benchmark datasets validate the effectiveness of the proposed framework.
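To make the metric contrast concrete, here is a minimal Python sketch (not the authors' implementation; the function names are illustrative, and it assumes per-sample AUTKC equals the average of the top-kappa accuracies for kappa = 1, ..., K) comparing the 0/1 top-k error with an AUTKC-style score that rewards placing the true label higher inside the top-K list.

    import numpy as np

    def topk_error(scores, label, k):
        # 0/1 top-k error: 1 if the true label is outside the k best-scored classes.
        topk = np.argsort(scores)[::-1][:k]
        return float(label not in topk)

    def autkc_sample(scores, label, K):
        # Hypothetical per-sample AUTKC, assuming it averages the top-kappa
        # accuracies for kappa = 1..K: (K - rank + 1) / K when the true label
        # has (1-based) rank <= K, and 0 otherwise.
        rank = int(np.argsort(np.argsort(-scores))[label]) + 1
        return max(K - rank + 1, 0) / K

    # Toy example: two score vectors tie on top-2 error but differ in AUTKC.
    y = 1
    s1 = np.array([0.1, 0.9, 0.2, 0.5])  # true label ranked 1st
    s2 = np.array([0.1, 0.5, 0.2, 0.9])  # true label ranked 2nd
    print(topk_error(s1, y, 2), topk_error(s2, y, 2))      # 0.0 0.0
    print(autkc_sample(s1, y, 2), autkc_sample(s2, y, 2))  # 1.0 0.5

Under this reading, two score functions that are indistinguishable by top-k error can still be separated by AUTKC whenever they rank the true label at different depths inside the top-K window, which is the discrimination gap the abstract describes.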
Pages: 5053-5069
Number of pages: 17