MoEP-AE: Autoencoding Mixtures of Exponential Power Distributions for Open-Set Recognition

Cited by: 13
Authors
Sun, Jiayin [1 ,2 ,3 ]
Wang, Hong [4 ]
Dong, Qiulei [1 ,2 ,3 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
[2] Chinese Acad Sci, Ctr Excellence Brain Sci & Intelligence Technol, Beijing 100190, Peoples R China
[3] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
[4] Univ Chinese Acad Sci, Coll Life Sci, Beijing 100049, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Task analysis; Training; Power distribution; Decoding; Gaussian distribution; Open-set recognition; autoencoder; scale mixture distribution; exponential power distribution;
DOI
10.1109/TCSVT.2022.3200112
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Open-set recognition aims to identify unknown classes while maintaining classification performance on known classes, and it has attracted increasing attention in the pattern recognition field. However, how to learn effective feature representations, whose distributions are usually complex, for classifying both known-class and unknown-class samples when only known-class samples are available for training remains an ongoing issue in open-set recognition. In contrast to methods that adopt a single Gaussian, a mixture of Gaussians (MoG), or multiple MoGs, we propose a novel autoencoder, called MoEP-AE, that learns feature representations by modeling them as mixtures of exponential power distributions (MoEPs) in latent spaces. The proposed autoencoder reflects the observation that many real-world distributions are sub-Gaussian or super-Gaussian and can thus be represented by MoEPs but not by a single Gaussian, an MoG, or multiple MoGs. We design a differentiable sampler that can draw samples from an MoEP, which guarantees that the proposed autoencoder can be trained effectively. Furthermore, we propose an MoEP-AE-based open-set recognition method, called MoEP-AE-OSR, by introducing a discrimination strategy: at the training stage, the MoEP-AE models the distributions of the features extracted from the input known-class samples by minimizing a designed loss function. Extensive experimental results in both standard-dataset and cross-dataset settings demonstrate that MoEP-AE-OSR outperforms 14 existing open-set recognition methods in most cases on both open-set and closed-set recognition tasks.
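The differentiable sampler mentioned in the abstract is the technical crux: drawing from a mixture involves a discrete component choice and a non-Gaussian component density, neither of which is differentiable out of the box. Below is a minimal, hypothetical PyTorch sketch of one way such a sampler could work; it is not the paper's implementation. It assumes a Gamma-based reparameterization for each exponential power (EP) component and a Gumbel-softmax relaxation for the mixture weights, and the names sample_ep and sample_moep are invented for illustration.

```python
import torch
import torch.nn.functional as F

def sample_ep(mu, alpha, beta):
    """Reparameterized sample from an exponential power (generalized Gaussian)
    distribution with density
        p(x) = beta / (2 * alpha * Gamma(1/beta)) * exp(-(|x - mu| / alpha)**beta).
    Uses the identity: if G ~ Gamma(1/beta, 1) and S is a random sign, then
    mu + alpha * S * G**(1/beta) follows EP(mu, alpha, beta); Gamma.rsample
    provides implicit-reparameterization gradients w.r.t. beta.
    """
    g = torch.distributions.Gamma(1.0 / beta, torch.ones_like(beta)).rsample()
    sign = torch.sign(torch.rand_like(mu) - 0.5)   # parameter-free: no gradient needed
    return mu + alpha * sign * g.pow(1.0 / beta)   # gradients flow via rsample

def sample_moep(mu, alpha, beta, logits, tau=1.0):
    """Differentiable sample from a K-component MoEP. Component selection is
    relaxed with Gumbel-softmax (an assumption, not necessarily the paper's
    choice). mu, alpha, beta, logits all have shape (..., K).
    """
    w = F.gumbel_softmax(logits, tau=tau)   # soft one-hot mixture weights
    x = sample_ep(mu, alpha, beta)          # one reparameterized draw per component
    return (w * x).sum(dim=-1)              # relaxed mixture sample, shape (...)
```

In this parameterization, beta = 2 recovers a Gaussian component, beta < 2 yields heavier-tailed (super-Gaussian) components, and beta > 2 yields lighter-tailed (sub-Gaussian) ones, which is what lets an MoEP fit latent distributions that a single Gaussian or an MoG cannot.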
Pages: 312-325
Number of pages: 14