Mean Hilbert envelope coefficients (MHEC) for robust speaker and language identification

Cited by: 53
Authors
Sadjadi, Seyed Omid [1 ]
Hansen, John H. L. [1 ]
Affiliation
[1] University of Texas at Dallas, Department of Electrical Engineering, Center for Robust Speech Systems (CRSS), Richardson, TX 75080, USA
Funding
U.S. National Science Foundation
Keywords
Language identification; MHEC; Mismatch conditions; Robust features; Speaker identification; Multitaper MFCC; Speech; Noise; Classification; Verification; Modulations; Recognition; Features
DOI
10.1016/j.specom.2015.04.005
Chinese Library Classification (CLC)
O42 [Acoustics]
Discipline codes
070206; 082403
Abstract
Adverse noisy conditions pose great challenges to automatic speech applications, including speaker and language identification (SID and LID), where mel-frequency cepstral coefficients (MFCC) are the most commonly adopted acoustic features. Although systems trained using MFCCs provide competitive performance under matched conditions, it is well known that such systems are susceptible to acoustic mismatch between training and test conditions due to noise and channel degradations. Motivated by this fact, this study proposes an alternative noise-robust acoustic feature front-end that is capable of capturing speaker identity as well as language structure/content conveyed in the speech signal. Specifically, a feature extraction procedure inspired by human auditory processing is proposed. The proposed feature is based on the Hilbert envelope of Gammatone filterbank outputs that represent the envelope of the auditory nerve response. The subband amplitude modulations, which are captured through smoothed Hilbert envelopes (a.k.a. temporal envelopes), carry useful acoustic information and have been shown to be robust to signal degradations. Effectiveness of the proposed front-end, termed mean Hilbert envelope coefficients (MHEC), is evaluated in the context of SID and LID tasks using degraded speech material from the DARPA Robust Automatic Transcription of Speech (RATS) program. In addition, we investigate the impact of the dynamic range compression stage in the MHEC feature extraction process on performance using logarithmic and power-law non-linearities. Experimental results indicate that: (i) the MHEC feature is highly effective and performs favorably compared to other conventional and state-of-the-art front-ends, and (ii) the power-law non-linearity consistently yields the best performance across different conditions for both SID and LID tasks. (C) 2015 Elsevier B.V. All rights reserved.
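The abstract outlines the MHEC processing chain: Gammatone subband filtering, Hilbert-envelope extraction and smoothing, frame-wise averaging, and dynamic range compression with either a logarithmic or a power-law non-linearity. The sketch below illustrates such a chain in Python under stated assumptions; it is not the authors' implementation. The function name mhec_sketch, the number of bands, the ERB-spaced center frequencies, the 20 Hz smoothing cutoff, the 25 ms / 10 ms framing, the 1/15 power-law exponent, the number of retained coefficients, and the final DCT decorrelation step are all illustrative choices not specified in this record.

```python
# Minimal MHEC-style front-end sketch (assumptions noted in the lead-in above).
import numpy as np
from scipy.signal import gammatone, lfilter, hilbert, butter, filtfilt
from scipy.fft import dct


def mhec_sketch(x, fs, n_bands=24, fmin=100.0, n_ceps=13,
                frame_len=0.025, frame_shift=0.010,
                smooth_cutoff=20.0, power=1.0 / 15.0):
    """Return an (n_frames, n_ceps) array of MHEC-like coefficients."""
    # Gammatone center frequencies spaced on the ERB-number scale.
    erb = lambda f: 21.4 * np.log10(1.0 + 0.00437 * f)
    erb_inv = lambda e: (10.0 ** (e / 21.4) - 1.0) / 0.00437
    cfs = erb_inv(np.linspace(erb(fmin), erb(0.45 * fs), n_bands))

    # Low-pass filter used to smooth the Hilbert (temporal) envelopes.
    b_lp, a_lp = butter(2, smooth_cutoff / (fs / 2.0), btype="low")

    win = int(round(frame_len * fs))
    hop = int(round(frame_shift * fs))
    n_frames = max(1 + (len(x) - win) // hop, 0)
    feats = np.zeros((n_frames, n_bands))

    for k, cf in enumerate(cfs):
        # 4th-order IIR gammatone filter for this subband.
        b, a = gammatone(cf, "iir", fs=fs)
        sub = lfilter(b, a, x)
        # Hilbert envelope of the subband signal, then low-pass smoothing.
        env = np.abs(hilbert(sub))
        env = filtfilt(b_lp, a_lp, env)
        # Frame-wise mean of the smoothed envelope.
        for t in range(n_frames):
            feats[t, k] = np.mean(env[t * hop: t * hop + win])

    # Dynamic range compression: power-law here; np.log would give the
    # logarithmic variant compared in the paper. DCT decorrelates the bands.
    feats = np.maximum(feats, 1e-12) ** power
    return dct(feats, type=2, norm="ortho", axis=1)[:, :n_ceps]


if __name__ == "__main__":
    # Example: 3 s of noise at 8 kHz -> (298, 13) feature matrix.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(3 * 8000)
    print(mhec_sketch(x, fs=8000).shape)
```

With 25 ms frames and a 10 ms shift, the envelope averaging acts as the "mean" in mean Hilbert envelope coefficients; swapping the power-law line for a logarithm reproduces the other compression variant the abstract compares.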
Pages: 138-148 (11 pages)