Exploring Data-Independent Dimensionality Reduction in Sparse Representation-Based Speaker Identification

Cited by: 2
Authors
Haris, B. C. [1 ]
Sinha, Rohit [1 ]
Affiliations
[1] Department of Electronics and Electrical Engineering, Indian Institute of Technology Guwahati, Guwahati 781039, India
Keywords
Sparse representation classification; Random projections; Speaker recognition; Supervectors; Dimensionality reduction
DOI
10.1007/s00034-014-9757-x
CLC Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
The sparse representation classification (SRC) approach has attracted attention in many signal processing domains over the past few years. Recently, it has been successfully explored for the speaker recognition task using Gaussian mixture model (GMM) mean supervectors, which are typically of the order of tens of thousands in dimension, as speaker representations. As a result, the complexity of such systems becomes very high. With the use of the state-of-the-art i-vector representation, the dimension of the GMM mean supervectors can be reduced effectively. However, the i-vector approach involves a high-dimensional data projection matrix that is learned with a factor analysis approach over a huge amount of data from a large number of speakers. Moreover, estimating the i-vector for a given utterance is itself a computationally complex procedure. Motivated by these facts, we explore the use of data-independent projection approaches for reducing the dimensionality of GMM mean supervectors. The data-independent projection methods studied in this work include a normal random projection and two kinds of sparse random projections. The study is performed on SRC-based speaker identification using the NIST SRE 2005 dataset, which includes channel-matched and mismatched conditions. We find that using data-independent random projections for the dimensionality reduction of the supervectors results in only a 3% absolute loss in performance compared to the data-dependent (i-vector) approach. It is highlighted that, with the use of highly sparse random projection matrices having 1 as the non-zero coefficients, a significant reduction in computational complexity is achieved in computing the projections. Further, as these matrices do not require a floating-point representation, their storage requirement is also very small compared to that of the data-dependent or the normal random projection matrices. These reduced-complexity sparse random projections would be of interest in the context of speaker recognition applications implemented on platforms with low computational power.
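For illustration, the following is a minimal sketch (in Python, not the authors' implementation) of the two ideas summarized in the abstract: reducing a high-dimensional supervector with a data-independent sparse random projection whose non-zero entries are, up to scaling, of unit magnitude, and identifying the speaker by sparse representation classification, i.e. coding the test vector over a dictionary of training vectors and assigning it to the class with the smallest reconstruction residual. The dimensions, the sparsity parameter s, and the use of scikit-learn's Lasso as an approximate l1 solver are illustrative assumptions, not details taken from the paper.

```python
# Sketch of data-independent sparse random projection + SRC speaker identification.
# All sizes and parameters below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import normalize


def sparse_random_projection(d_in, d_out, s=3, seed=0):
    """Data-independent projection matrix with entries in {+sqrt(s), 0, -sqrt(s)},
    drawn with probabilities {1/(2s), 1 - 1/s, 1/(2s)} (Achlioptas/Li-style sparse RP)."""
    rng = np.random.default_rng(seed)
    probs = [1.0 / (2 * s), 1.0 - 1.0 / s, 1.0 / (2 * s)]
    signs = rng.choice([+1.0, 0.0, -1.0], size=(d_out, d_in), p=probs)
    return np.sqrt(s) * signs / np.sqrt(d_out)


def src_identify(train_vecs, train_labels, test_vec, alpha=0.01):
    """Assign test_vec to the speaker whose training columns give the smallest
    l2 reconstruction residual under an l1-regularised (sparse) code."""
    D = normalize(np.asarray(train_vecs).T, axis=0)   # dictionary: one unit-norm column per training utterance
    y = test_vec / np.linalg.norm(test_vec)
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    lasso.fit(D, y)                                   # approximate l1-minimisation of the code
    x = lasso.coef_
    residuals = {}
    for spk in set(train_labels):
        mask = np.array([lbl == spk for lbl in train_labels])
        x_spk = np.where(mask, x, 0.0)                # keep only this speaker's coefficients
        residuals[spk] = np.linalg.norm(y - D @ x_spk)
    return min(residuals, key=residuals.get)


if __name__ == "__main__":
    d_super, d_low = 20000, 400                       # illustrative supervector / projected dimensions
    rng = np.random.default_rng(1)
    P = sparse_random_projection(d_super, d_low)

    # Toy "supervectors": two speakers, a few noisy training utterances each.
    centres = {spk: rng.normal(size=d_super) for spk in ("spk1", "spk2")}
    train, labels = [], []
    for spk, c in centres.items():
        for _ in range(3):
            train.append(P @ (c + 0.3 * rng.normal(size=d_super)))
            labels.append(spk)
    test = P @ (centres["spk2"] + 0.3 * rng.normal(size=d_super))
    print("identified as:", src_identify(train, labels, test))
```

Because the non-zero entries of the projection matrix differ only in sign, the matrix can be stored as signs plus a sparsity pattern and applied with additions and subtractions only, which is the storage and complexity advantage the abstract refers to.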
Pages: 2521-2538
Number of pages: 18