Hierarchical and Bidirectional Joint Multi-Task Classifiers for Natural Language Understanding

Cited: 0
Authors
Ji, Xiaoyu [1 ,2 ]
Hu, Wanyang [3 ]
Liang, Yanyan [1 ,4 ]
Affiliations
[1] Macau Univ Sci & Technol, Fac Innovat Engn, Sch Comp Sci & Engn, Macau, Peoples R China
[2] Guangxi Key Lab Machine Vis & Intelligent Control, Wuzhou 543002, Peoples R China
[3] Univ Svizzera Italiana, Dept Informat, CH-6962 Lugano, Switzerland
[4] CEI High Tech Res Inst Co Ltd, Macau, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
multi-task classifier; hierarchical structure; bidirectional joint structure; MASSIVE dataset;
DOI
10.3390/math11244895
Chinese Library Classification
O1 [Mathematics];
Discipline codes
0701; 070101;
Abstract
The MASSIVE dataset is a spoken language understanding resource package for slot filling, intent classification, and virtual-assistant evaluation tasks. It contains multilingual utterances from humans communicating with a virtual assistant. In this paper, we exploit the relationship between intent classification and slot filling to improve exact-match accuracy, proposing five models with hierarchical and bidirectional architectures: two hierarchical variants and three bidirectional variants. These are the hierarchical concatenation model, the hierarchical attention-based model, the bidirectional max-pooling model, the bidirectional LSTM model, and the bidirectional attention-based model. Our models significantly improved the averaged exact-match accuracy: the hierarchical attention-based model improved accuracy by 1.01 points on the full training dataset, and in the zero-shot setup the exact-match accuracy increased from 53.43 to 53.91. These results indicate that, for multi-task problems, exploiting the relevance between tasks can improve a model's overall performance.
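To make the hierarchical idea concrete, here is a minimal NumPy sketch of a forward pass in the spirit of the hierarchical concatenation model: the intent head is applied first to a pooled utterance representation, and its output distribution is then concatenated onto every token state before slot classification. All dimensions, weight initializations, and names here are invented toy assumptions for illustration, not the paper's actual architecture or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy dimensions (assumptions, not taken from the paper)
T, H = 6, 16            # sequence length, encoder hidden size
N_INTENTS, N_SLOTS = 4, 9

# Stand-ins for the outputs of a shared sentence encoder
token_states = rng.normal(size=(T, H))       # per-token hidden states
sentence_state = token_states.mean(axis=0)   # pooled utterance state

# Intent head: predicted first, from the pooled representation
W_intent = rng.normal(size=(H, N_INTENTS)) * 0.1
intent_probs = softmax(sentence_state @ W_intent)

# Hierarchical concatenation: the intent distribution is appended
# to every token state before the slot-filling classifier
W_slot = rng.normal(size=(H + N_INTENTS, N_SLOTS)) * 0.1
slot_inputs = np.concatenate(
    [token_states, np.tile(intent_probs, (T, 1))], axis=1)
slot_logits = slot_inputs @ W_slot
slot_tags = slot_logits.argmax(axis=1)

print(intent_probs.shape, slot_tags.shape)  # (4,) (6,)
```

The point of the concatenation is that the slot classifier can condition on the predicted intent, which is one way to exploit the relationship between the two tasks that the abstract describes.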
Pages: 22