Using boosting to improve a hybrid HMM/neural network speech recognizer

Cited by: 26
Authors
Schwenk, H [1]
Affiliations
[1] CNRS, LIMSI, F-91403 Orsay, France
Keywords
DOI
10.1109/ICASSP.1999.759874
Chinese Library Classification (CLC)
O42 [Acoustics]
Subject classification code
070206; 082403
Abstract
Boosting is a general method for improving the performance of almost any learning algorithm. A recently proposed and very promising boosting algorithm is AdaBoost [7]. In this paper we investigate whether AdaBoost can be used to improve a hybrid HMM/neural network continuous speech recognizer. Boosting significantly improves the word error rate from 6.3% to 5.3% on a test set of the OGI Numbers95 corpus, a medium-size continuous numbers recognition task. These results compare favorably with other combining techniques that use several different feature representations or additional information from longer time spans.

Ensemble methods, or committees of learning machines, can often improve the performance of a system compared to a single learning machine. A recently proposed and very promising boosting algorithm is AdaBoost [7]. It constructs a composite classifier by sequentially training classifiers while putting more and more emphasis on certain patterns. Several authors have reported substantial improvements over a single classifier on machine learning benchmark problems from the UCI repository, e.g. [2, 6]. These experiments displayed rather intriguing generalization properties, such as a continued decrease in generalization error after the training error reaches zero. However, most of these databases are very small (only several hundred training examples) and contain no significant amount of noise. There is also recent evidence that AdaBoost may well overfit when several hundred thousand classifiers are combined [8], and [5] reports severe performance degradations of AdaBoost when 20% noise is added to the class labels. In summary, the reasons for the impressive success of AdaBoost are still not completely understood. To the best of our knowledge, an application of AdaBoost to a real-world problem has not yet been reported in the literature either. In this paper we investigate whether AdaBoost can be applied to boost the performance of a continuous speech recognition system. In this domain we have to deal with large amounts of data (often more than 1 million training examples) and inherently noisy phoneme labels.

The paper is organized as follows. In the next two sections we summarize the AdaBoost algorithm and our baseline speech recognizer. In the third section we show how AdaBoost can be applied to this task, report results on the Numbers95 corpus, and compare them with other classifier combination techniques. The paper finishes with conclusions and perspectives for future work.
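The abstract's one-line description of AdaBoost (sequentially training classifiers while putting more and more emphasis on certain patterns) can be made concrete with a small sketch. Below is a minimal Python illustration of discrete AdaBoost with decision stumps on a hypothetical toy dataset; this is not the paper's setup, where the weak learners are neural network phoneme classifiers inside an HMM decoder, and the names train_stump, adaboost, and predict are illustrative only.

```python
# Minimal discrete AdaBoost sketch with decision stumps as weak learners.
# Toy data and function names are hypothetical, not the paper's recognizer.
import numpy as np

def train_stump(X, y, w):
    """Pick the single-feature threshold split minimizing weighted error."""
    best = (0, 0.0, 1, np.inf)                     # (feature, threshold, polarity, error)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = s * np.where(X[:, j] <= t, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, t, s, err)
    return best

def adaboost(X, y, rounds=10):
    n = len(y)
    w = np.full(n, 1.0 / n)                        # start with uniform example weights
    ensemble = []
    for _ in range(rounds):
        j, t, s, err = train_stump(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)      # weight of this weak classifier
        pred = s * np.where(X[:, j] <= t, 1, -1)
        w *= np.exp(-alpha * y * pred)             # up-weight misclassified examples
        w /= w.sum()
        ensemble.append((alpha, j, t, s))
    return ensemble

def predict(ensemble, X):
    score = np.zeros(len(X))
    for alpha, j, t, s in ensemble:
        score += alpha * s * np.where(X[:, j] <= t, 1, -1)
    return np.sign(score)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)     # toy binary task with labels in {-1, +1}
    model = adaboost(X, y, rounds=20)
    print("train accuracy:", (predict(model, X) == y).mean())
```

The step that matches the abstract's description is the weight update: examples the current classifier gets wrong receive larger weights, so the next classifier concentrates on exactly those patterns, and the final decision is a weighted vote of all classifiers.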
Pages: 1009 - 1012
Page count: 4
Related papers
50 records in total
  • [41] Speaker-adaptation in a hybrid HMM-MLP recognizer
    Neto, JP
    Martins, C
    Almeida, LB
    1996 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, CONFERENCE PROCEEDINGS, VOLS 1-6, 1996, : 3382 - 3385
  • [42] Design and Implementation of a Bayesian Network Speech Recognizer
    Wiggers, Pascal
    Rothkrantz, Leon J. M.
    van de Lisdonk, Rob
    TEXT, SPEECH AND DIALOGUE, 2010, 6231 : 447 - 454
  • [43] Context modeling in a hybrid HMM-neural net speech recognition system
    Franco, H
    Weintraub, M
    Cohen, M
    1997 IEEE INTERNATIONAL CONFERENCE ON NEURAL NETWORKS, VOLS 1-4, 1997, : 2089 - 2092
  • [44] Improved Topic Classification and Keyword Discovery using an HMM-based Speech Recognizer Trained without Supervision
    Siu, Man-Hung
    Gish, Herbert
    Chan, Arthur
    Belfield, William
    11TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2010 (INTERSPEECH 2010), VOLS 3 AND 4, 2010, : 2842 - 2845
  • [45] Fast speaker adaptation of large vocabulary continuous density HMM speech recognizer using a basis transform approach
    Boulis, C
    Digalakis, V
    2000 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, PROCEEDINGS, VOLS I-VI, 2000, : 989 - 992
  • [46] Speech Emotion Recognition with Hybrid Neural Network
    Wei, Chuanzheng
    Sun, Xiao
    Tian, Fang
    Ren, Fuji
    5TH INTERNATIONAL CONFERENCE ON BIG DATA COMPUTING AND COMMUNICATIONS (BIGCOM 2019), 2019, : 298 - 302
  • [47] HEAR: An Hybrid Episodic-Abstract speech Recognizer
    Demange, Sebastien
    Van Compernolle, Dirk
    INTERSPEECH 2009: 10TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2009, VOLS 1-5, 2009, : 3023 - 3026
  • [48] Using out-of-language data to improve an under-resourced speech recognizer
    Imseng, David
    Motlicek, Petr
    Bourlard, Herve
    Garner, Philip N.
    SPEECH COMMUNICATION, 2014, 56 : 142 - 151
  • [49] A hybrid HMM-neural network with gradient descent parameter training
    Salazar, J
    Robinson, M
    Azimi-Sadjadi, MR
    PROCEEDINGS OF THE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS 2003, VOLS 1-4, 2003, : 1086 - 1091
  • [50] HEAR: An hybrid episodic-abstract speech recognizer
    Demange, Sebastien
    Van Compernolle, Dirk
    Katholieke Universiteit Leuven, Dept. ESAT, Kasteelpark Arenberg 10, B-3001 Leuven, Belgium
    INTERSPEECH 2009: 10TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2009, : 3067 - 3070