N-gram Language Models in JLASER Neural Network Speech Recognizer

Cited: 0
Authors
Konopik, Miloslav [1 ]
Habernal, Ivan [1 ]
Brychcin, Tomas [1 ]
Affiliations
[1] Univ W Bohemia, Dept Comp Sci & Engn, Plzen 30614, Czech Republic
Keywords: (none listed)
DOI: not available
Chinese Library Classification: TM (Electrical Technology); TN (Electronics and Communication Technology)
Discipline codes: 0808; 0809
Abstract
In our recent research we have found that neural networks can be more efficient for speech recognition than the state-of-the-art approach based on Gaussian mixtures. This holds only for small corpora; however, many applications do not require a large recognition vocabulary. In this article we describe our neural-network-based speech recognizer, called JLASER, and show the effect of applying n-gram language models to it.
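The abstract's central tool, an n-gram language model, can be illustrated with a minimal bigram model trained by counting word pairs, here with add-one (Laplace) smoothing. This is a generic sketch of the technique, not the JLASER implementation; the toy corpus and function names are illustrative.

```python
from collections import Counter

def train_bigram(corpus):
    """Train a bigram model with add-one (Laplace) smoothing.

    Returns a function prob(prev, word) estimating P(word | prev).
    """
    unigrams, bigrams = Counter(), Counter()
    vocab = set()
    for sentence in corpus:
        # Pad with sentence-boundary markers so boundary bigrams are counted.
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        vocab.update(tokens)
        unigrams.update(tokens[:-1])          # contexts (everything that can precede a word)
        bigrams.update(zip(tokens, tokens[1:]))
    V = len(vocab)

    def prob(prev, word):
        # Add-one smoothing: unseen bigrams still get non-zero probability.
        return (bigrams[(prev, word)] + 1) / (unigrams[prev] + V)

    return prob

# Toy usage: "recognize" follows "we" in both training sentences,
# so it is more probable after "we" than the unseen pair ("we", "words" ... "speech").
p = train_bigram(["we recognize speech", "we recognize words"])
print(p("we", "recognize"))  # → 0.375 (i.e. (2+1)/(2+6) with V = 6)
```

In a recognizer, such probabilities rescore competing word sequences produced by the acoustic model, favoring sequences the language model considers likely.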
Pages: 167 - 170
Page count: 4
Related papers
(showing entries 21-30 of 50)
  • [21] LARGE MARGIN ESTIMATION OF N-GRAM LANGUAGE MODELS FOR SPEECH RECOGNITION VIA LINEAR PROGRAMMING
    Magdin, Vladimir
    Jiang, Hui
    2010 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2010, : 5398 - 5401
  • [22] N-gram language models for Polish language. Basic concepts and applications in automatic speech recognition systems
    Rapp, Bartosz
    2008 INTERNATIONAL MULTICONFERENCE ON COMPUTER SCIENCE AND INFORMATION TECHNOLOGY (IMCSIT), VOLS 1 AND 2, 2008, : 295 - 298
  • [23] Language modeling by string pattern N-gram for Japanese speech recognition
    Ito, A
    Kohda, M
    ICSLP 96 - FOURTH INTERNATIONAL CONFERENCE ON SPOKEN LANGUAGE PROCESSING, PROCEEDINGS, VOLS 1-4, 1996, : 490 - 493
  • [24] N-gram language models for offline handwritten text recognition
    Zimmermann, M
    Bunke, H
    NINTH INTERNATIONAL WORKSHOP ON FRONTIERS IN HANDWRITING RECOGNITION, PROCEEDINGS, 2004, : 203 - 208
  • [25] Variable-length category n-gram language models
    Univ of Cambridge, Cambridge, United Kingdom
    COMPUTER SPEECH AND LANGUAGE, 1: 99 - 124
  • [26] TOPIC N-GRAM COUNT LANGUAGE MODEL ADAPTATION FOR SPEECH RECOGNITION
    Haidar, Md. Akmal
    O'Shaughnessy, Douglas
    2012 IEEE WORKSHOP ON SPOKEN LANGUAGE TECHNOLOGY (SLT 2012), 2012, : 165 - 169
  • [27] N-gram Counts and Language Models from the Common Crawl
    Buck, Christian
    Heafield, Kenneth
    van Ooyen, Bas
    LREC 2014 - NINTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2014, : 3579 - 3584
  • [28] Character n-Gram Embeddings to Improve RNN Language Models
    Takase, Sho
    Suzuki, Jun
    Nagata, Masaaki
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 5074 - 5082
  • [29] On the N-gram Approximation of Pre-trained Language Models
    Krishnan, Aravind
    Alabi, Jesujoba O.
    Klakow, Dietrich
    INTERSPEECH 2023, 2023, : 371 - 375
  • [30] Learning N-gram Language Models from Uncertain Data
    Kuznetsov, Vitaly
    Liao, Hank
    Mohri, Mehryar
    Riley, Michael
    Roark, Brian
    17TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2016), VOLS 1-5: UNDERSTANDING SPEECH PROCESSING IN HUMANS AND MACHINES, 2016, : 2323 - 2327