WOOD: Wasserstein-Based Out-of-Distribution Detection

Cited by: 1
Authors
Wang, Yinan [1 ]
Sun, Wenbo [2 ]
Jin, Jionghua [3 ]
Kong, Zhenyu [4 ]
Yue, Xiaowei [5 ]
Affiliations
[1] Rensselaer Polytech Inst, Dept Ind & Syst Engn, Troy, NY 12180 USA
[2] Univ Michigan, Transportat Res Inst, Ann Arbor, MI 48109 USA
[3] Univ Michigan, Dept Ind & Operat Engn, Ann Arbor, MI 48109 USA
[4] Virginia Tech, Grado Dept Ind & Syst Engn, Blacksburg, VA 24060 USA
[5] Tsinghua Univ, Inst Qual & Reliabil, Dept Ind Engn, Beijing 100190, Peoples R China
Keywords
Cyber security; image classification; machine learning; OOD detection; Wasserstein distance;
DOI
10.1109/TPAMI.2023.3328883
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The training and testing data for deep-neural-network-based classifiers are usually assumed to be sampled from the same distribution. When part of the testing samples are drawn from a distribution that is sufficiently far away from that of the training samples (a.k.a. out-of-distribution (OOD) samples), the trained neural network has a tendency to make high-confidence predictions for these OOD samples. Detection of the OOD samples is critical when training a neural network used for image classification, object detection, etc. It can enhance the classifier's robustness to irrelevant inputs, and improve the system's resilience and security under different forms of attacks. Detection of OOD samples has three main challenges: (i) the proposed OOD detection method should be compatible with various architectures of classifiers (e.g., DenseNet, ResNet) without significantly increasing the model complexity and requirements on computational resources; (ii) the OOD samples may come from multiple distributions, whose class labels are commonly unavailable; (iii) a score function needs to be defined to effectively separate OOD samples from in-distribution (InD) samples. To overcome these challenges, we propose a Wasserstein-based out-of-distribution detection (WOOD) method. The basic idea is to define a Wasserstein-based score that evaluates the dissimilarity between a test sample and the distribution of InD samples. An optimization problem is then formulated and solved based on the proposed score function. The statistical learning bound of the proposed method is investigated to guarantee that the loss value achieved by the empirical optimizer approximates the global optimum. The comparison study results demonstrate that the proposed WOOD consistently outperforms other existing OOD detection methods.
Pages: 944 - 956
Number of pages: 13
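The abstract describes the WOOD score only at a high level: a Wasserstein-based measure of dissimilarity between a test sample and the in-distribution (InD) data. As a rough, self-contained illustration of that idea, and not the authors' exact construction, the Python sketch below scores a sample by the discrete Wasserstein cost of transporting the classifier's softmax output onto the nearest one-hot class distribution. The function name `wood_score`, the 0-1 ground cost, and the example threshold comments are assumptions made for illustration only.

```python
import numpy as np

def wood_score(softmax_probs, cost_matrix):
    """Wasserstein-style OOD score for a single test sample (illustrative sketch).

    softmax_probs: (K,) predicted class probabilities from the classifier.
    cost_matrix:   (K, K) ground cost between classes; the 0-1 cost used in the
                   example below is an assumption, not necessarily the paper's choice.

    Transporting all the mass of softmax_probs onto the one-hot distribution of
    class k costs sum_j p_j * C[j, k]; the score is the smallest such cost over k.
    Confident InD-style predictions therefore get low scores, while diffuse
    predictions (typical of OOD inputs) get high scores.
    """
    transport_costs = softmax_probs @ cost_matrix  # cost of collapsing onto each class
    return float(transport_costs.min())

# Toy usage: 3 InD classes with a 0-1 ground cost between distinct classes.
K = 3
cost = np.ones((K, K)) - np.eye(K)

in_dist_pred = np.array([0.95, 0.03, 0.02])  # confident, InD-like prediction
ood_pred = np.array([0.40, 0.35, 0.25])      # diffuse, OOD-like prediction

print(wood_score(in_dist_pred, cost))  # ~0.05 -> below a calibrated threshold, keep as InD
print(wood_score(ood_pred, cost))      # ~0.60 -> above the threshold, flag as OOD
```

In practice, a detection threshold on such a score would be calibrated on held-out InD data. The paper itself goes further: as the abstract states, it formulates and solves an optimization problem based on the proposed score function, which this sketch does not reproduce.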