Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation

Cited by: 16
Authors
Besnier, Victor [1 ,3 ,4 ]
Bursuc, Andrei [2 ]
Picard, David [3 ]
Briot, Alexandre [1 ]
Affiliations
[1] Valeo, Creteil, France
[2] Valeo.ai, Paris, France
[3] Univ Gustave Eiffel, CNRS, Ecole Ponts, LIGM, Marne La Vallee, France
[4] CY Univ, ENSEA, CNRS, ETIS UMR8051, Cergy, France
Source
2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021) | 2021
DOI
10.1109/ICCV48922.2021.01541
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we tackle the detection of out-of-distribution (OOD) objects in semantic segmentation. By analyzing the literature, we found that current methods are either accurate or fast, but not both, which limits their usability in real-world applications. To get the best of both aspects, we propose to mitigate the common shortcomings by following four design principles: decoupling the OOD detection from the segmentation task, observing the entire segmentation network instead of just its output, generating training data for the OOD detector by leveraging blind spots in the segmentation network, and focusing the generated data on localized regions in the image to simulate OOD objects. Our main contribution is a new OOD detection architecture called ObsNet, associated with a dedicated training scheme based on Local Adversarial Attacks (LAA). We validate the soundness of our approach across numerous ablation studies. We also show it obtains top performance in both speed and accuracy when compared with ten recent methods from the literature on three different datasets.
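The Local Adversarial Attack idea described in the abstract — perturbing only a localized image region to trigger a blind spot in the network — can be illustrated on a toy model. The sketch below is not the paper's implementation: it applies an FGSM-style gradient-sign step to a one-layer logistic "classifier", with a binary mask restricting the perturbation to the local region; the function name `local_fgsm` and the toy model are assumptions made for illustration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_fgsm(x, w, mask, eps=0.1):
    """FGSM-style step restricted to a local region of the input.

    x:    (d,) input vector (stand-in for image pixels)
    w:    (d,) weights of a toy logistic classifier p = sigmoid(w . x)
    mask: (d,) binary mask selecting the "local" region to attack
    eps:  perturbation magnitude
    """
    p = sigmoid(np.dot(w, x))
    # Gradient of the cross-entropy loss (true label = 1) w.r.t. x:
    # dL/dx = -(1 - p) * w
    grad = -(1.0 - p) * w
    # Gradient-sign step, zeroed outside the masked region, so the
    # classifier's confidence drops only on the attacked pixels.
    return x + eps * np.sign(grad) * mask
```

In the paper's setting, the attacked input then serves as synthetic "OOD-like" training data for the observer network, since the segmentation network fails on it while most of the image stays in-distribution.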
Pages: 15681-15690
Page count: 10