Is ATIS too shallow to go deeper for benchmarking Spoken Language Understanding models?

Cited by: 7
Authors
Bechet, Frederic [1 ]
Raymond, Christian [2 ]
Affiliations
[1] Aix Marseille Univ, Univ Toulon, CNRS, LIS, Marseille, France
[2] INSA Rennes INRIA IRISA, Rennes, France
Keywords
Spoken Language Understanding; ATIS; Deep Neural Network; Conditional Random Fields; NETWORKS;
DOI
10.21437/Interspeech.2018-2256
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The ATIS (Air Travel Information Service) corpus will soon be celebrating its 30th birthday. Designed originally to benchmark spoken language systems, it still represents the best-known corpus for benchmarking Spoken Language Understanding (SLU) systems. In 2010, in a paper titled "What is left to be understood in ATIS?" [1], Tur et al. discussed the relevance of this corpus after more than 10 years of research on statistical models for performing SLU tasks. Nowadays, in the Deep Neural Network (DNN) era, ATIS is still used as the main benchmark corpus for evaluating all kinds of DNN models, leading to further improvements in SLU accuracy, although rather limited ones, compared to previous state-of-the-art models. We propose in this paper to investigate these results obtained on ATIS from a qualitative rather than a purely quantitative point of view, and to answer the two following questions: what kind of qualitative improvement did DNN models bring to SLU on the ATIS corpus? Is there anything left, from a qualitative point of view, in the remaining 5% of errors made by current state-of-the-art models?
Pages: 3449 - 3453
Page count: 5
Related papers
28 items in total
  • [1] Benchmarking Transformers-based models on French Spoken Language Understanding tasks
    Cattan, Oralie
    Ghannay, Sahar
    Servan, Christophe
    Rosset, Sophie
    INTERSPEECH 2022, 2022, : 1238 - 1242
  • [2] Discriminative Models for Spoken Language Understanding
    Wang, Ye-Yi
    Acero, Alex
    INTERSPEECH 2006 AND 9TH INTERNATIONAL CONFERENCE ON SPOKEN LANGUAGE PROCESSING, VOLS 1-5, 2006, : 2426 - 2429
  • [3] Benchmarking benchmarks: introducing new automatic indicators for benchmarking Spoken Language Understanding corpora
    Bechet, Frederic
    Raymond, Christian
    INTERSPEECH 2019, 2019, : 4145 - 4149
  • [4] RNN TRANSDUCER MODELS FOR SPOKEN LANGUAGE UNDERSTANDING
    Thomas, Samuel
    Kuo, Hong-Kwang J.
    Saon, George
    Tuske, Zoltan
    Kingsbury, Brian
    Kurata, Gakuto
    Kons, Zvi
    Hoory, Ron
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 7493 - 7497
  • [5] On the Evaluation of Speech Foundation Models for Spoken Language Understanding
    Arora, Siddhant
    Pasad, Ankita
    Chien, Chung-Ming
    Han, Jionghao
    Sharma, Roshan
    Jung, Jee-weon
    Dhamyal, Hira
    Chen, William
    Shon, Suwon
    Lee, Hung-yi
    Livescu, Karen
    Watanabe, Shinji
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 11923 - 11938
  • [6] JOINT GENERATIVE AND DISCRIMINATIVE MODELS FOR SPOKEN LANGUAGE UNDERSTANDING
    Dinarelli, Marco
    Moschitti, Alessandro
    Riccardi, Giuseppe
    2008 IEEE WORKSHOP ON SPOKEN LANGUAGE TECHNOLOGY: SLT 2008, PROCEEDINGS, 2008, : 61 - 64
  • [7] Data Augmentation for Spoken Language Understanding via Pretrained Language Models
    Peng, Baolin
    Zhu, Chenguang
    Zeng, Michael
    Gao, Jianfeng
    INTERSPEECH 2021, 2021, : 1219 - 1223
  • [8] Improving Conversation-Context Language Models with Multiple Spoken Language Understanding Models
    Masumura, Ryo
    Tanaka, Tomohiro
    Ando, Atsushi
    Kamiyama, Hosana
    Oba, Takanobu
    Kobashikawa, Satoshi
    Aono, Yushi
    INTERSPEECH 2019, 2019, : 834 - 838
  • [9] ON-LINE ADAPTATION OF SEMANTIC MODELS FOR SPOKEN LANGUAGE UNDERSTANDING
    Bayer, Ali Orkan
    Riccardi, Giuseppe
    2013 IEEE WORKSHOP ON AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING (ASRU), 2013, : 90 - 95
  • [10] Can ChatGPT Detect Intent? Evaluating Large Language Models for Spoken Language Understanding
    He, Mutian
    Garner, Philip N.
    INTERSPEECH 2023, 2023, : 1109 - 1113