Exploring Data-Independent Dimensionality Reduction in Sparse Representation-Based Speaker Identification

Cited by: 2
Authors
Haris, B. C. [1 ]
Sinha, Rohit [1 ]
Affiliations
[1] Indian Inst Technol, Dept Elect & Elect Engn, Gauhati 781039, India
Keywords
Sparse representation classification; Random projections; Speaker recognition; Supervectors; Dimensionality reduction; Verification; Recognition; Algorithm
DOI
10.1007/s00034-014-9757-x
CLC classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Discipline codes
0808; 0809
Abstract
Sparse representation classification (SRC) has attracted attention in many signal processing domains in the past few years. Recently, it has been successfully explored for speaker recognition using Gaussian mixture model (GMM) mean supervectors, which are typically of dimensionality in the tens of thousands, as speaker representations. As a result, the complexity of such systems becomes very high. With the state-of-the-art i-vector representation, the dimensionality of the GMM mean supervectors can be reduced effectively, but the i-vector approach involves a high-dimensional projection matrix that is learned by factor analysis over a large amount of data from many speakers. Moreover, estimating the i-vector for a given utterance is itself computationally demanding. Motivated by these facts, we explore data-independent projection approaches for reducing the dimensionality of GMM mean supervectors. The data-independent projection methods studied in this work are a normal random projection and two kinds of sparse random projections. The study is performed on SRC-based speaker identification using the NIST SRE 2005 dataset, which includes channel-matched and channel-mismatched conditions. We find that using data-independent random projections for supervector dimensionality reduction results in only a 3% absolute loss in performance compared with the data-dependent (i-vector) approach. We highlight that with highly sparse random projection matrices whose non-zero coefficients are 1, a significant reduction in computational complexity is achieved in computing the projections. Further, as these matrices do not require floating-point representation, their storage requirement is also very small compared with that of the data-dependent or normal random projection matrices. These reduced-complexity sparse random projections are of interest for speaker recognition applications implemented on platforms with low computational power.
Pages: 2521-2538
Number of pages: 18
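
The abstract describes reducing tens-of-thousands-dimensional GMM mean supervectors with a fixed, data-independent random matrix and then performing SRC in the projected space. The short sketch below illustrates that pipeline under stated assumptions: the matrix sizes, the Achlioptas-style {+1, 0, -1} construction with sparsity parameter s, and the orthogonal matching pursuit solver standing in for an l1-minimization routine are illustrative choices, not the authors' implementation or the paper's exact configuration.

import numpy as np

def sparse_random_projection(d_in, d_out, s=3, seed=0):
    # Achlioptas-style sparse matrix: entries +1, 0, -1 with probabilities
    # 1/(2s), 1 - 1/s, 1/(2s); the sqrt(s/d_out) scale roughly preserves norms.
    rng = np.random.default_rng(seed)
    signs = rng.choice([1.0, 0.0, -1.0], size=(d_out, d_in),
                       p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])
    return np.sqrt(s / d_out) * signs

def omp(D, y, n_nonzero=5):
    # Plain orthogonal matching pursuit, used here only as a stand-in sparse
    # solver; an l1-minimization routine could be substituted.
    residual, support = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def src_identify(D, labels, y):
    # SRC decision: pick the speaker whose dictionary atoms reconstruct y best.
    x = omp(D, y)
    residuals = {spk: np.linalg.norm(y - D[:, labels == spk] @ x[labels == spk])
                 for spk in np.unique(labels)}
    return min(residuals, key=residuals.get)

# Toy usage with random stand-ins for GMM mean supervectors.
rng = np.random.default_rng(1)
d_super, d_low = 20000, 400           # supervector and projected dimensions
labels = np.repeat(np.arange(10), 3)  # 10 speakers, 3 training utterances each
train = rng.standard_normal((d_super, labels.size))
P = sparse_random_projection(d_super, d_low)
D = P @ train
D /= np.linalg.norm(D, axis=0, keepdims=True)   # unit-norm dictionary atoms
test = P @ (train[:, 4] + 0.05 * rng.standard_normal(d_super))
print("identified speaker:", src_identify(D, labels, test / np.linalg.norm(test)))

Because the non-zero entries of such a projection matrix are just signs, applying it reduces to additions and subtractions, and the matrix can be stored without floating-point values, which is the complexity and storage advantage the abstract points to.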
相关论文
共 50 条
  • [41] Supervised Dimensionality Reduction of Hyperspectral Imagery Via Local and Global Sparse Representation
    Cao, Faxian
    Yang, Zhijing
    Hong, Xiaobin
    Cheng, Yongqiang
    Huang, Yuezhen
    Lv, Jujian
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2021, 14 : 3860 - 3874
  • [42] Sparse Representation-Based Extreme Learning Machine for Motor Imagery EEG Classification
    She, Qingshan
    Chen, Kang
    Ma, Yuliang
    Thinh Nguyen
    Zhang, Yingchun
    COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE, 2018, 2018
  • [43] A Sparse Representation-Based Binary Hypothesis Model for Target Detection in Hyperspectral Images
    Zhang, Yuxiang
    Du, Bo
    Zhang, Liangpei
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2015, 53 (03): : 1346 - 1354
  • [44] Adaptive Sparse Representation-Based Minimum Entropy Deconvolution for Bearing Fault Detection
    Sun, Yuanhang
    Yu, Jianbo
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2022, 71
  • [45] Speaker identification using a novel combination of sparse representation and Gaussian mixture models
    Ma Yunjie
    AUTOMATIC CONTROL AND MECHATRONIC ENGINEERING III, 2014, 615 : 265 - 269
  • [46] Sparse representation-based joint angle and Doppler frequency estimation for MIMO radar
    Li, Jianfeng
    Zhang, Xiaofei
    MULTIDIMENSIONAL SYSTEMS AND SIGNAL PROCESSING, 2015, 26 (01) : 179 - 192
  • [47] Dimensionality Reduction for Hyperspectral Data Based on Pairwise Constraint Discriminative Analysis and Nonnegative Sparse Divergence
    Wang, Xuesong
    Kong, Yi
    Gao, Yang
    Cheng, Yuhu
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2017, 10 (04) : 1552 - 1562
  • [48] I-vector Extraction for Speaker Recognition Based on Dimensionality Reduction
    Ibrahim, Noor Salwani
    Ramli, Dzati Athiar
    KNOWLEDGE-BASED AND INTELLIGENT INFORMATION & ENGINEERING SYSTEMS (KES-2018), 2018, 126 : 1534 - 1540
  • [49] Unlabeled data driven cost-sensitive inverse projection sparse representation-based classification with 1/2 regularization
    Yang, Xiaohui
    Wang, Zheng
    Sun, Jian
    Xu, Zongben
    SCIENCE CHINA-INFORMATION SCIENCES, 2022, 65 (08)
  • [50] Adaptive sparse graph learning based dimensionality reduction for classification
    Chen, Puhua
    Jiao, Licheng
    Liu, Fang
    Zhao, Zhiqiang
    Zhao, Jiaqi
    APPLIED SOFT COMPUTING, 2019, 82