Understanding the Feature Norm for Out-of-Distribution Detection

Cited by: 4
Authors
Park, Jaewoo [1, 2]
Chai, Jacky Chen Long [1]
Yoon, Jaeho [1]
Teoh, Andrew Beng Jin [1]
Affiliations
[1] Yonsei Univ, Seoul, South Korea
[2] AiV Co, Houston, TX, USA
Funding
National Research Foundation of Singapore
Keywords
DOI
10.1109/ICCV51070.2023.00150
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
A neural network trained on a classification dataset often exhibits a higher vector norm of hidden-layer features for in-distribution (ID) samples, while producing relatively lower norm values on unseen instances from out-of-distribution (OOD). Despite this intriguing phenomenon being utilized in many applications, its underlying cause has not been thoroughly investigated. In this study, we demystify this phenomenon by scrutinizing the discriminative structures concealed in the intermediate layers of a neural network. Our analysis leads to the following discoveries: (1) The feature norm is a confidence value of a classifier hidden in the network layer, specifically its maximum logit; hence, the feature norm distinguishes OOD from ID in the same manner that a classifier's confidence does. (2) The feature norm is class-agnostic, so it can detect OOD samples across diverse discriminative models. (3) The conventional feature norm fails to capture the deactivation tendency of hidden-layer neurons, which may lead to misidentification of ID samples as OOD instances. To resolve this drawback, we propose a novel negative-aware norm (NAN) that can capture both the activation and deactivation tendencies of hidden-layer neurons. We conduct extensive experiments on NAN, demonstrating its efficacy and compatibility with existing OOD detectors, as well as its capability in label-free environments.
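To make the scoring idea in the abstract concrete, the sketch below (PyTorch assumed as the framework) contrasts the conventional feature-norm OOD score with an illustrative negative-aware variant. The function names, the positive-minus-negative combination, and the `penultimate` tensor are assumptions made here for illustration only; the paper's exact NAN definition should be taken from the paper itself.

```python
import torch

def feature_norm_score(features: torch.Tensor) -> torch.Tensor:
    """Conventional feature-norm OOD score: the L2 norm of the
    hidden-layer features. Higher scores are treated as more ID-like."""
    return features.norm(p=2, dim=-1)

def negative_aware_norm_score(pre_act: torch.Tensor) -> torch.Tensor:
    """Illustrative negative-aware (NAN-style) score.

    NOTE: this is a hypothetical combination, not the paper's exact
    formula. It scores a sample by the norm of its activated part minus
    the norm of its deactivated (negative pre-activation) part, so both
    activation and deactivation tendencies contribute to the score.
    """
    positive = pre_act.clamp(min=0.0)      # activated neurons
    negative = (-pre_act).clamp(min=0.0)   # deactivated neurons
    return positive.norm(p=2, dim=-1) - negative.norm(p=2, dim=-1)

# Usage sketch: `penultimate` stands in for the pre-activation output of
# the network's last hidden layer for a batch of inputs (name assumed).
penultimate = torch.randn(8, 512)
scores = negative_aware_norm_score(penultimate)  # higher => more ID-like
```

A score of this form can be thresholded directly or plugged into an existing OOD detector as a drop-in replacement for the plain feature norm.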
Pages: 1557-1567
Number of pages: 11
Related Papers (50 total)
  • [11] Calculating Class-wise Weighted Feature Norm for Detecting Out-of-distribution Samples
    Yu, Yeonguk
    Shin, Sungho
    Lee, Kyoobin
    2023 20TH INTERNATIONAL CONFERENCE ON UBIQUITOUS ROBOTS, UR, 2023, : 974 - 979
  • [12] On the Learnability of Out-of-distribution Detection
    Fang, Zhen
    Li, Yixuan
    Liu, Feng
    Han, Bo
    Lu, Jie
    JOURNAL OF MACHINE LEARNING RESEARCH, 2024, 25
  • [13] Entropic Out-of-Distribution Detection
    Macedo, David
    Ren, Tsang Ing
    Zanchettin, Cleber
    Oliveira, Adriano L. I.
    Ludermir, Teresa
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [14] FLaNS: Feature-Label Negative Sampling for Out-of-Distribution Detection
    Lim, Chaejin
    Hyeon, Junhee
    Lee, Kiseong
    Han, Dongil
    IEEE ACCESS, 2025, 13 : 43878 - 43888
  • [15] RankFeat: Rank-1 Feature Removal for Out-of-distribution Detection
    Song, Yue
    Sebe, Nicu
    Wang, Wei
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [16] Watermarking for Out-of-distribution Detection
    Wang, Qizhou
    Liu, Feng
    Zhang, Yonggang
    Zhang, Jing
    Gong, Chen
    Liu, Tongliang
    Han, Bo
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [17] Is Out-of-Distribution Detection Learnable?
    Fang, Zhen
    Li, Yixuan
    Lu, Jie
    Dong, Jiahua
    Han, Bo
    Liu, Feng
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [19] Out-of-Distribution Generalization With Causal Feature Separation
    Wang, Haotian
    Kuang, Kun
    Lan, Long
    Wang, Zige
    Huang, Wanrong
    Wu, Fei
    Yang, Wenjing
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (04) : 1758 - 1772
  • [20] Towards Boosting Out-of-Distribution Detection from a Spatial Feature Importance Perspective
    Zhu, Yao
    Yan, Xiu
    Xie, Chuanlong
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2025,