OpenAUC: Towards AUC-Oriented Open-Set Recognition

Cited by: 0
Authors
Wang, Zitai [1 ,2 ]
Xu, Qianqian [3 ]
Yang, Zhiyong [4 ]
He, Yuan [5 ]
Cao, Xiaochun [1 ,6 ]
Huang, Qingming [3 ,4 ,7 ,8 ]
Affiliations
[1] Chinese Acad Sci, Inst Informat Engn, SKLOIS, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Sch Cyber Secur, Beijing, Peoples R China
[3] Chinese Acad Sci, Inst Comp Tech, Key Lab Intelligent Informat Proc, Beijing, Peoples R China
[4] Univ Chinese Acad Sci, Sch Comp Sci & Tech, Beijing, Peoples R China
[5] Alibaba Grp, Hangzhou, Peoples R China
[6] Sun Yat Sen Univ, Sch Cyber Sci & Tech, Shenzhen Campus, Shenzhen, Peoples R China
[7] Univ Chinese Acad Sci, BDKM, Beijing, Peoples R China
[8] Peng Cheng Lab, Shenzhen, Guangdong, Peoples R China
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022) | 2022
Funding
National Key R&D Program of China; China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
PERFORMANCE;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Traditional machine learning follows a close-set assumption that the training and test sets share the same label space. However, in many practical scenarios, some test samples inevitably belong to unknown classes (open-set). To address this issue, Open-Set Recognition (OSR), whose goal is to make correct predictions on both close-set and open-set samples, has attracted rising attention. In this direction, the vast majority of the literature focuses on modeling the patterns of open-set samples, yet how to evaluate model performance on this challenging task remains unresolved. In this paper, a systematic analysis reveals that most existing metrics are essentially inconsistent with the aforementioned goal of OSR: (1) for metrics extended from close-set classification, such as the Open-set F-score, Youden's index, and Normalized Accuracy, a poor open-set prediction can escape a low score when paired with a superior close-set prediction; (2) the Novelty Detection AUC, which measures the ranking performance between close-set and open-set samples, ignores close-set performance. To fix these issues, we propose a novel metric named OpenAUC. Compared with existing metrics, OpenAUC enjoys a concise pairwise formulation that evaluates open-set and close-set performance in a coupled manner. Further analysis shows that OpenAUC is free from the aforementioned inconsistencies. Finally, an end-to-end learning method is proposed to minimize the OpenAUC risk, and experimental results on popular benchmark datasets speak to its effectiveness.
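To make the pairwise formulation described in the abstract concrete, below is a minimal sketch of how such a coupled metric could be computed. It assumes that a pair of one close-set sample and one open-set sample counts as a success only when the close-set sample is correctly classified and its open-set (novelty) score ranks strictly below that of the open-set sample. The function name open_auc, the tie handling, and the toy data are illustrative assumptions, not the authors' released code.

```python
# A minimal sketch (not the authors' reference implementation) of a pairwise
# OpenAUC-style computation. Assumptions: larger open-set scores mean "more
# likely unknown"; `known_correct` flags whether each close-set sample was
# classified correctly; ties between scores are not credited.

import numpy as np


def open_auc(known_correct: np.ndarray,
             known_scores: np.ndarray,
             unknown_scores: np.ndarray) -> float:
    """Fraction of (close-set, open-set) pairs in which the close-set sample
    is both correctly classified and ranked below the open-set sample."""
    known_correct = known_correct.astype(bool)
    # Pairwise ranking: compare every known sample's score with every unknown sample's score.
    ranked_below = known_scores[:, None] < unknown_scores[None, :]
    # A pair counts only if the close-set prediction is also correct.
    wins = ranked_below & known_correct[:, None]
    return float(wins.mean())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    known_correct = rng.random(100) < 0.9           # toy close-set accuracy of about 0.9
    known_scores = rng.normal(0.0, 1.0, size=100)   # open-set scores for known samples
    unknown_scores = rng.normal(1.0, 1.0, size=50)  # open-set scores for unknown samples
    print(f"OpenAUC (toy data): {open_auc(known_correct, known_scores, unknown_scores):.3f}")
```

Because every pair is gated on a correct close-set prediction, a model cannot obtain a high value through close-set accuracy alone or through open-set ranking alone, which mirrors the coupling the abstract argues existing metrics lack.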
Pages: 13