Exploiting classifier inter-level features for efficient out-of-distribution detection

Cited by: 1
Authors
Fayyad, Jamil [1 ]
Gupta, Kashish [2 ]
Mahdian, Navid [2 ]
Gruyer, Dominique [3 ]
Najjaran, Homayoun [2 ]
Affiliations
[1] Univ British Columbia, Sch Engn, 3333 Univ Way, Kelowna, BC V1V 1V7, Canada
[2] Univ Victoria, Fac Engn & Comp Sci, 3800 Finnerty Rd, Victoria, BC V8P 5C2, Canada
[3] Univ Gustave Eiffel, PICS L COSYS, IFSTTAR, 25 Marronniers, F-78000 Champs Sur Marne, France
Keywords
Out-of-distribution detection; Deep learning-based classification; Machine learning; Feature exploitation; Intermediate feature extraction;
DOI
10.1016/j.imavis.2023.104897
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep learning approaches have achieved state-of-the-art performance across a wide range of applications. However, it is often incorrectly assumed that samples encountered at inference follow the same distribution as the training data, an assumption that impairs a model's ability to handle Out-of-Distribution (OOD) data during deployment. Whereas most OOD detection approaches focus on the outputs of the last layer, we propose a novel mechanism that exploits features extracted from intermediate layers of a deep classifier. Specifically, we train an off-the-shelf auxiliary network on features from early layers to learn distinctive representations that improve OOD detection. The proposed network can be appended to any classification model without modifying its original architecture, and the mechanism does not require access to OOD data during training. We evaluate the mechanism on a variety of backbone architectures and datasets under near-OOD and far-OOD scenarios. The results demonstrate improvements in OOD detection over other state-of-the-art approaches. In particular, the proposed mechanism improves AUROC by 14.2% and 8.3% over the strong OOD baseline method, and by 3.2% and 3.9% over the second-best performing approach, on the CIFAR-10 and CIFAR-100 datasets, respectively.
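At a high level, the described mechanism amounts to tapping intermediate activations of an already-trained classifier and feeding them to a small auxiliary scorer, without touching the original network. The sketch below illustrates that general idea in PyTorch only; the resnet18 backbone, the choice of tapped layers, the pooling, and the MLP head are illustrative assumptions, not the authors' exact design or training objective.

```python
# Minimal sketch (not the paper's exact method): attach an auxiliary OOD head
# to intermediate features of a frozen classifier using forward hooks.
import torch
import torch.nn as nn
from torchvision.models import resnet18

backbone = resnet18(num_classes=10)
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False  # the original classifier is left unmodified

feats = {}

def hook(name):
    def _hook(module, inp, out):
        # global-average-pool each intermediate feature map to a vector
        feats[name] = out.mean(dim=(2, 3))
    return _hook

# tap two early/intermediate stages; this layer choice is an assumption
backbone.layer1.register_forward_hook(hook("layer1"))
backbone.layer2.register_forward_hook(hook("layer2"))

# auxiliary network trained on the concatenated intermediate features;
# here an illustrative MLP that outputs a single score per sample
aux_head = nn.Sequential(
    nn.Linear(64 + 128, 256), nn.ReLU(),
    nn.Linear(256, 1),
)

def ood_score(x):
    with torch.no_grad():
        _ = backbone(x)                    # forward pass populates `feats`
    z = torch.cat([feats["layer1"], feats["layer2"]], dim=1)
    return aux_head(z).squeeze(1)          # higher = more in-distribution (convention here)

scores = ood_score(torch.randn(4, 3, 32, 32))  # e.g. CIFAR-sized inputs
print(scores.shape)                            # torch.Size([4])
```

Consistent with the abstract, such an auxiliary head would be trained on in-distribution data only; the specific training objective is not shown here.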
Pages: 7