Composition of Deep and Spiking Neural Networks for Very Low Bit Rate Speech Coding

Cited by: 18
Authors
Cernak, Milos [1 ]
Lazaridis, Alexandros [1 ]
Asaei, Afsaneh [1 ]
Garner, Philip N. [1 ]
Affiliation
[1] Idiap Res Inst, Ctr Parc, CH-1920 Martigny, Switzerland
Funding
Swiss National Science Foundation
Keywords
Very low bit rate speech coding; deep neural networks; spiking neural networks; continuous F0 coding; recognition; attribute
DOI
10.1109/TASLP.2016.2604566
CLC Number
O42 [Acoustics]
Subject Classification Codes
070206; 082403
Abstract
Most current very low bit rate (VLBR) speech coding systems use hidden Markov model (HMM) based speech recognition and synthesis techniques. This allows information (such as phonemes) to be transmitted segment by segment, which decreases the bit rate. However, an encoder based on phoneme speech recognition may create bursts of segmental errors, which are further propagated to any suprasegmental (such as syllable) information coding. Together with voicing-detection errors in pitch parametrization, HMM-based speech coding leads to speech discontinuities and unnatural speech-sound artifacts. In this paper, we propose a novel VLBR speech coding framework based on neural networks (NNs) for end-to-end speech analysis and synthesis without HMMs. The framework relies on a phonological (subphonetic) representation of speech. It is designed as a composition of deep and spiking NNs: a bank of phonological analyzers at the transmitter and a phonological synthesizer at the receiver, both realized as deep NNs, along with a spiking NN as an incremental and robust encoder of syllable boundaries for coding of continuous fundamental frequency (F0). A combination of phonological features defines many more sound patterns than the phonetic features used by HMM-based speech coders; this finer analysis/synthesis code contributes to smoother encoded speech. Listeners significantly prefer the NN-based approach because the encoded speech has fewer discontinuities and artifacts. A single forward pass is required during speech encoding and decoding. The proposed VLBR speech coder operates at a bit rate of approximately 360 bits/s.
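To make the transmitter/receiver composition concrete, the following is a minimal, purely illustrative Python sketch of the pipeline the abstract describes. All module names, dimensionalities, and the integrate-and-fire threshold are assumptions for illustration and do not come from the paper: the real analyzers and synthesizer are trained deep NNs, and the real syllable-boundary encoder is a spiking NN, not the leaky-integrator stand-in used here.

```python
import numpy as np

# Illustrative sizes only; the abstract does not specify these values.
N_PHONOLOGICAL = 13   # assumed number of phonological classes
N_SPECTRAL = 40       # assumed dimensionality of a synthesis parameter frame


def phonological_analyzers(frames: np.ndarray) -> np.ndarray:
    """Transmitter: a bank of analyzers, one per phonological class.

    Stand-in: each 'deep NN' is reduced to a random projection plus a
    sigmoid, producing a per-frame posterior in [0, 1].
    """
    rng = np.random.default_rng(0)
    posteriors = []
    for _ in range(N_PHONOLOGICAL):
        w = rng.standard_normal((frames.shape[1], 1))
        posteriors.append(1.0 / (1.0 + np.exp(-(frames @ w))))
    return np.hstack(posteriors)              # shape: (frames, N_PHONOLOGICAL)


def spiking_syllable_encoder(f0: np.ndarray) -> np.ndarray:
    """Stand-in for the spiking NN: a leaky integrator that 'spikes'
    (marks a syllable boundary) when accumulated F0 change crosses a
    threshold, then resets. Leak and threshold are assumed values."""
    potential = 0.0
    spikes = np.zeros_like(f0)
    for t in range(1, len(f0)):
        potential = 0.9 * potential + abs(f0[t] - f0[t - 1])
        if potential > 20.0:                  # assumed threshold (Hz)
            spikes[t] = 1.0
            potential = 0.0
    return spikes


def phonological_synthesizer(code: np.ndarray) -> np.ndarray:
    """Receiver: one network maps the decoded phonological posteriors and
    boundary track back to synthesis parameters; again a random stand-in."""
    rng = np.random.default_rng(1)
    w = rng.standard_normal((code.shape[1], N_SPECTRAL))
    return np.tanh(code @ w)


# End-to-end: encoding and decoding each amount to one forward pass.
rng = np.random.default_rng(2)
frames = rng.standard_normal((100, 39))       # fake acoustic feature frames
f0 = 120.0 + 10.0 * np.sin(np.linspace(0.0, 6.0, 100))  # fake F0 track (Hz)

code = np.hstack([phonological_analyzers(frames),
                  spiking_syllable_encoder(f0)[:, None]])
params = phonological_synthesizer(code)
print(code.shape, params.shape)               # (100, 14) (100, 40)
```

The sketch preserves the one structural property the abstract highlights: the analyzers, boundary encoder, and synthesizer compose into a pipeline in which encoding and decoding each require only a single forward pass.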
Pages: 2301-2312 (12 pages)