Meta Auxiliary Learning for Low-resource Spoken Language Understanding

Cited by: 1
Authors
Gao, Yingying [1 ]
Feng, Junlan [1 ]
Deng, Chao [1 ]
Zhang, Shilei [1 ]
Institutions
[1] China Mobile Res Inst, Beijing, Peoples R China
Keywords
spoken language understanding; meta learning; auxiliary learning; SPEECH;
DOI
10.21437/Interspeech.2022-916
Chinese Library Classification
O42 [Acoustics]
Subject classification
070206; 082403
Abstract
Spoken language understanding (SLU) treats automatic speech recognition (ASR) and natural language understanding (NLU) as a unified task and usually suffers from data scarcity. We exploit a joint ASR and NLU training method based on meta auxiliary learning that improves low-resource SLU by taking advantage only of abundant manual transcriptions of speech data. One clear advantage of this method is that it provides a flexible framework for training a low-resource SLU model without requiring access to any further semantic annotations. Specifically, an NLU model serves as a label-generation network that predicts intent and slot tags from text; a multi-task network trains the ASR and SLU tasks synchronously from speech; and the predictions of the label-generation network are delivered to the multi-task network as semantic targets. The effectiveness of the proposed algorithm is demonstrated by experiments on the public CATSLU dataset, where it produces ASR hypotheses better suited to the downstream NLU task.
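The training structure described in the abstract can be sketched in a few lines. This is a minimal illustrative skeleton, not the authors' implementation: the label-generation network, the per-example losses, and the auxiliary weight `lam` are all stubbed or assumed, standing in for a real NLU model and real ASR/SLU losses.

```python
import random

random.seed(0)  # deterministic stub losses for the sketch

def nlu_label_generator(transcript_tokens):
    # Hypothetical label-generation network: maps a manual transcript to
    # pseudo semantic targets (an intent id and per-token slot tags).
    # A real system would run a trained NLU model; here we stub it with
    # deterministic toy labels (4 intents, 3 slot tags).
    intent = len(transcript_tokens) % 4
    slots = [len(t) % 3 for t in transcript_tokens]
    return intent, slots

def multitask_step(speech_feats, transcript_tokens, pseudo_intent, pseudo_slots):
    # Hypothetical multi-task network step: one ASR loss against the manual
    # transcript plus one SLU loss against the pseudo labels delivered by
    # the label-generation network. Stubbed with random scalars; in practice
    # these would be e.g. a CTC loss and an intent/slot-tagging loss.
    asr_loss = random.random()
    slu_loss = random.random()
    lam = 0.5  # auxiliary-task weight (assumed value)
    return asr_loss + lam * slu_loss

# Joint training over (speech, transcript) pairs only -- no manually
# annotated semantics are required, matching the low-resource setting.
batch = [([[0.0] * 40] * 10, ["turn", "on", "the", "light"]),
         ([[0.0] * 40] * 8, ["play", "some", "music"])]
total = 0.0
for feats, tokens in batch:
    intent, slots = nlu_label_generator(tokens)   # pseudo semantic targets
    total += multitask_step(feats, tokens, intent, slots)
```

The key design point the sketch mirrors is the data flow: semantic targets for the SLU branch come from the label-generation network rather than from human annotation, so only transcribed speech is consumed.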
Pages: 2703-2707 (5 pages)
Related papers (50 in total)
  • [1] Bidirectional Representations for Low-Resource Spoken Language Understanding
    Meeus, Quentin
    Moens, Marie-Francine
    Van Hamme, Hugo
    APPLIED SCIENCES-BASEL, 2023, 13 (20):
  • [2] Large-scale Transfer Learning for Low-resource Spoken Language Understanding
    Jia, Xueli
    Wang, Jianzong
    Zhang, Zhiyong
    Cheng, Ning
    Xiao, Jing
    INTERSPEECH 2020, 2020, : 1555 - 1559
  • [3] Bottleneck Low-rank Transformers for Low-resource Spoken Language Understanding
    Wang, Pu
    Van Hamme, Hugo
    INTERSPEECH 2022, 2022, : 1248 - 1252
  • [4] MULTITASK LEARNING FOR LOW RESOURCE SPOKEN LANGUAGE UNDERSTANDING
    Meeus, Quentin
    Moens, Marie Francine
    Van Hamme, Hugo
    INTERSPEECH 2022, 2022, : 4073 - 4077
  • [5] Investigating Meta-Learning Algorithms for Low-Resource Natural Language Understanding Tasks
    Dou, Zi-Yi
    Yu, Keyi
    Anastasopoulos, Antonios
    2019 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING AND THE 9TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (EMNLP-IJCNLP 2019): PROCEEDINGS OF THE CONFERENCE, 2019, : 1192 - 1197
  • [6] Developing an AI-Assisted Low-Resource Spoken Language Learning App for Children
    Getman, Yaroslav
    Phan, Nhan
    Al-Ghezi, Ragheb
    Voskoboinik, Ekaterina
    Singh, Mittul
    Grosz, Tamas
    Kurimo, Mikko
    Salvi, Giampiero
    Svendsen, Torbjorn
    Strombergsson, Sofia
    Smolander, Anna
    Ylinen, Sari
    IEEE ACCESS, 2023, 11 : 86025 - 86037
  • [7] Capsule Networks for Low Resource Spoken Language Understanding
    Renkens, Vincent
    Van Hamme, Hugo
    19TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2018), VOLS 1-6: SPEECH RESEARCH FOR EMERGING MARKETS IN MULTILINGUAL SOCIETIES, 2018, : 601 - 605
  • [8] Meta Learning for Low-Resource Molecular Optimization
    Wang, Jiahao
    Zheng, Shuangjia
    Chen, Jianwen
    Yang, Yuedong
    JOURNAL OF CHEMICAL INFORMATION AND MODELING, 2021, 61 (04) : 1627 - 1636
  • [9] Variational model for low-resource natural language generation in spoken dialogue systems
    Tran, Van-Khanh
    Nguyen, Le-Minh
COMPUTER SPEECH AND LANGUAGE, 2021, 65
  • [10] Towards Cross-Corpora Generalization for Low-Resource Spoken Language Identification
    Dey, Spandan
    Sahidullah, Md
    Saha, Goutam
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2024, 32 : 5040 - 5050