Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem

Cited by: 258
Authors
Hein, Matthias [1 ]
Andriushchenko, Maksym [2 ]
Bitterwolf, Julian [1 ]
Affiliations
[1] Univ Tübingen, Tübingen, Germany
[2] Saarland Univ, Saarbrücken, Germany
Source
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019) | 2019
Keywords
Classification
DOI
10.1109/CVPR.2019.00013
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Classifiers used in the wild, in particular in safety-critical systems, should not only have good generalization properties but should also know when they don't know; in particular, they should make low-confidence predictions far away from the training data. We show that ReLU-type neural networks, which yield a piecewise linear classifier function, fail in this regard, as they almost always produce high-confidence predictions far away from the training data. For bounded domains like images, we propose a new robust optimization technique, similar to adversarial training, which enforces low-confidence predictions far away from the training data. We show that this technique is surprisingly effective in reducing the confidence of predictions far away from the training data while maintaining high-confidence predictions and test error on the original classification task comparable to standard training.
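The abstract describes a min-max objective: the usual cross-entropy loss on in-distribution samples plus a term that drives down the worst-case confidence the network assigns in a neighbourhood of out-of-distribution inputs (e.g. noise images). Below is a minimal PyTorch sketch of one such training step, assuming an l-infinity PGD inner maximization of the confidence; the names model, eps, pgd_steps, and lam are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn.functional as F

def mean_max_log_conf(logits):
    # Mean (over the batch) of the log of the maximum softmax probability:
    # log max_k p_k(x) = max_k logit_k - logsumexp(logits).
    return (logits.max(dim=1).values - torch.logsumexp(logits, dim=1)).mean()

def confidence_enhanced_step(model, opt, x_in, y_in, x_out,
                             eps=0.3, pgd_steps=10, lam=1.0):
    """One training step: cross-entropy on in-distribution data plus a
    penalty on the worst-case confidence around out-of-distribution inputs.
    x_in, y_in : labelled in-distribution batch
    x_out      : out-of-distribution batch (e.g. uniform noise images in [0,1])
    All hyperparameter values here are placeholders."""
    # Inner maximization: PGD in an l_inf ball of radius eps around x_out,
    # searching for the perturbation on which the model is most confident.
    x_adv = x_out.clone().detach()
    for _ in range(pgd_steps):
        x_adv.requires_grad_(True)
        conf = mean_max_log_conf(model(x_adv))
        grad, = torch.autograd.grad(conf, x_adv)
        with torch.no_grad():
            x_adv = x_adv + (2.0 * eps / pgd_steps) * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x_out - eps), x_out + eps)
            x_adv = x_adv.clamp(0.0, 1.0)   # stay in the valid image domain
    # Outer minimization: usual loss on real data, low confidence on x_adv.
    opt.zero_grad()
    loss_in = F.cross_entropy(model(x_in), y_in)
    loss_out = mean_max_log_conf(model(x_adv.detach()))
    loss = loss_in + lam * loss_out
    loss.backward()
    opt.step()
    return loss.item()

In this sketch, lam trades off classification accuracy against confidence reduction on out-of-distribution points, and pgd_steps = 0 would degenerate to penalizing confidence only at the noise points themselves rather than in a neighbourhood around them.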
Pages: 41-50
Number of pages: 10