Scene Novelty Prediction from Unsupervised Discriminative Feature Learning

Citations: 0
Authors
Ranjbar, Arian [1]
Yeh, Chun-Hsiao [2]
Hornauer, Sascha [1,2]
Yu, Stella X. [1,2]
Chan, Ching-Yao [1]
Affiliations
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
[2] Int Comp Sci Inst, Berkeley, CA 94704 USA
DOI
10.1109/itsc45102.2020.9294451
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Codes
0808; 0809
Abstract
Deep learning approaches are widely explored for various tasks in safety-critical autonomous driving systems. Network models, trained on big data, map inputs to probable predictions. However, it is unclear how to obtain a measure of confidence in these predictions at test time. Our approach to gaining this additional information is to estimate how similar the test data is to the training data the model was trained on. We map training instances onto a feature space that is most discriminative among them, and then model the entire training set as a Gaussian distribution in that feature space. The novelty of test data is characterized by its low probability under that distribution, or equivalently, by a large Mahalanobis distance in the feature space. Our distance metric in the discriminative feature space achieves better novelty prediction performance than state-of-the-art methods on most classes of CIFAR-10 and ImageNet. Using semantic segmentation as a proxy task often needed for autonomous driving, we show that our unsupervised novelty prediction correlates with the performance of a segmentation network trained on full pixel-wise annotations. These experimental results demonstrate potential applications of our method in identifying scene familiarity and quantifying confidence in autonomous driving actions.
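As a rough illustration of the scoring step described in the abstract, the NumPy sketch below fits a single Gaussian to pre-extracted training features and scores test features by Mahalanobis distance. It is a minimal sketch, not the authors' implementation: the function names and the eps regularizer are illustrative assumptions, and the unsupervised discriminative encoder that produces the features is assumed to be given.

    import numpy as np

    def fit_gaussian(train_feats, eps=1e-6):
        # Model the entire training set as one Gaussian in feature space.
        mean = train_feats.mean(axis=0)
        cov = np.cov(train_feats, rowvar=False)
        cov += eps * np.eye(cov.shape[0])  # regularize so the inverse is stable
        return mean, np.linalg.inv(cov)

    def mahalanobis_novelty(test_feats, mean, cov_inv):
        # Larger distance = lower probability under the training Gaussian,
        # i.e., a more novel (less familiar) scene.
        diff = test_feats - mean
        return np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

    # Usage sketch: train_feats / test_feats are (N, D) arrays from the encoder.
    # mean, cov_inv = fit_gaussian(train_feats)
    # scores = mahalanobis_novelty(test_feats, mean, cov_inv)

A threshold on these scores (or on the corresponding Gaussian likelihood) would then flag unfamiliar scenes for downstream confidence handling.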
Pages: 7
Related Papers
50 records in total
  • [41] Unsupervised Joint Feature Learning and Encoding for RGB-D Scene Labeling
    Wang, Anran
    Lu, Jiwen
    Cai, Jianfei
    Wang, Gang
    Cham, Tat-Jen
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2015, 24 (11) : 4459 - 4473
  • [42] Unsupervised Feature Learning for 3D Scene Reconstruction with Occupancy Maps
    Guizilini, Vitor
    Ramos, Fabio
    THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 3827 - 3833
  • [43] Unsupervised Deep Feature Learning With Iteratively Refined Pseudo Classes for Scene Representation
    Gong, Zhiqiang
    Zhong, Ping
    Hu, Weidong
    IEEE ACCESS, 2019, 7 : 94779 - 94792
  • [44] Congested scene classification via efficient unsupervised feature learning and density estimation
    Yuan, Yuan
    Wan, Jia
    Wang, Qi
    PATTERN RECOGNITION, 2016, 56 : 159 - 169
  • [45] UNSUPERVISED FEATURE LEARNING FOR SCENE CLASSIFICATION OF HIGH RESOLUTION REMOTE SENSING IMAGE
    Fu, Min
    Yuan, Yuan
    Lu, Xiaoqiang
    2015 IEEE CHINA SUMMIT & INTERNATIONAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING, 2015, : 206 - 210
  • [46] Discriminative Feature Learning Constrained Unsupervised Network for Cloud Detection in Remote Sensing Imagery
    Xie, Weiying
    Yang, Jian
    Li, Yunsong
    Lei, Jie
    Zhong, Jiaping
    Li, Jiaojiao
    REMOTE SENSING, 2020, 12 (03)
  • [47] Unsupervised Feature Selection for Microarray Gene Expression Data Based on Discriminative Structure Learning
    Ye, Xiucai
    Sakurai, Tetsuya
    JOURNAL OF UNIVERSAL COMPUTER SCIENCE, 2018, 24 (06) : 725 - 741
  • [48] Joint category-level and discriminative feature learning networks for unsupervised domain adaptation
    Zhang, Pengyu
    Huang, Junchu
    Zhou, Zhiheng
    Chen, Zengqun
    Shang, Junyuan
    Yang, Zhiwei
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2019, 37 (06) : 8499 - 8510
  • [49] Unsupervised Domain Adaptation via Weighted Sequential Discriminative Feature Learning for Sentiment Analysis
    Badr, Haidi
    Wanas, Nayer
    Fayek, Magda
APPLIED SCIENCES-BASEL, 2024, 14 (01)
  • [50] Pseudo-Label Guided Structural Discriminative Subspace Learning for Unsupervised Feature Selection
    Wang, Zheng
    Yuan, Yongjin
    Wang, Rong
    Nie, Feiping
    Huang, Qinghua
    Li, Xuelong
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (12) : 18605 - 18619