Hierarchical and Bidirectional Joint Multi-Task Classifiers for Natural Language Understanding

Cited by: 0
Authors
Ji, Xiaoyu [1 ,2 ]
Hu, Wanyang [3 ]
Liang, Yanyan [1 ,4 ]
Affiliations
[1] Macau Univ Sci & Technol, Fac Innovat Engn, Sch Comp Sci & Engn, Macau, Peoples R China
[2] Guangxi Key Lab Machine Vis & Intelligent Control, Wuzhou 543002, Peoples R China
[3] Univ Svizzera Italiana, Dept Informat, CH-6962 Lugano, Switzerland
[4] CEI High Tech Res Inst Co Ltd, Macau, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
multi-task classifier; hierarchical structure; bidirectional joint structure; MASSIVE dataset;
DOI
10.3390/math11244895
Chinese Library Classification
O1 [Mathematics];
Subject Classification Code
0701; 070101;
Abstract
The MASSIVE dataset is a spoken-language-understanding resource for slot filling, intent classification, and virtual-assistant evaluation; it contains utterances in many languages from humans communicating with a virtual assistant. In this paper, we exploited the relationship between intent classification and slot filling to improve exact match accuracy, proposing five models with hierarchical and bidirectional architectures: two hierarchical variants (the hierarchical concatenation model and the hierarchical attention-based model) and three bidirectional variants (the bidirectional max-pooling model, the bidirectional LSTM model, and the bidirectional attention-based model). Our models significantly improved the averaged exact match accuracy: the hierarchical attention-based model raised it by 1.01 points on the full training dataset, and in the zero-shot setup exact match accuracy rose from 53.43 to 53.91. These results show that, for multi-task problems, exploiting the relevance between tasks can improve a model's overall performance.
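As a rough illustration of the hierarchical idea described in the abstract (conditioning slot filling on the intent decision), the minimal PyTorch sketch below couples the two task heads through a shared encoder. The class name, the BiLSTM encoder, the mean pooling, and every dimension are illustrative assumptions only and do not reproduce the paper's actual architecture or training setup.

```python
import torch
import torch.nn as nn

class HierarchicalConcatJointClassifier(nn.Module):
    """Hypothetical sketch of a hierarchical joint intent/slot classifier.

    A shared encoder yields token states; an utterance-level intent is
    predicted from a pooled representation, and the intent logits are
    concatenated onto every token state before slot classification, so
    slot filling is conditioned on the intent decision.
    """

    def __init__(self, vocab_size, hidden_dim, num_intents, num_slots):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.encoder = nn.LSTM(hidden_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        enc_dim = 2 * hidden_dim
        self.intent_head = nn.Linear(enc_dim, num_intents)
        # The slot head sees token states concatenated with intent logits.
        self.slot_head = nn.Linear(enc_dim + num_intents, num_slots)

    def forward(self, token_ids):
        tokens, _ = self.encoder(self.embed(token_ids))        # (B, T, 2H)
        intent_logits = self.intent_head(tokens.mean(dim=1))   # (B, I)
        # Broadcast the intent decision to every token position.
        intent_ctx = intent_logits.unsqueeze(1).expand(-1, tokens.size(1), -1)
        slot_logits = self.slot_head(torch.cat([tokens, intent_ctx], dim=-1))
        return intent_logits, slot_logits

def exact_match(intent_pred, intent_gold, slot_pred, slot_gold):
    """An utterance counts as correct only when both the intent and the
    full slot sequence match the references (exact match accuracy)."""
    return bool((intent_pred == intent_gold).all()) and torch.equal(slot_pred, slot_gold)
```

Exact match accuracy, the metric reported above, credits an utterance only when the intent and all slot labels are simultaneously correct, which is why modelling the dependency between the two tasks can move this score even when per-task accuracies change little.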
Pages: 22
Related Papers
50 records in total
  • [31] Survey of Multi-task Learning in Natural Language Processing: Regarding Task Relatedness and Training Methods
    Zhang, Zhihan
    Yu, Wenhao
    Yu, Mengxia
    Guo, Zhichun
    Jiang, Meng
    17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 943 - 956
  • [32] HFedMTL: Hierarchical Federated Multi-Task Learning
    Yi, Xingfu
    Li, Rongpeng
    Peng, Chenghui
    Wu, Jianjun
    Zhao, Zhifeng
    2022 IEEE 33RD ANNUAL INTERNATIONAL SYMPOSIUM ON PERSONAL, INDOOR AND MOBILE RADIO COMMUNICATIONS (IEEE PIMRC), 2022,
  • [33] Compressed Hierarchical Representations for Multi-Task Learning and Task Clustering
    de Freitas, Joao Machado
    Berg, Sebastian
    Geiger, Bernhard C.
    Muecke, Manfred
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [34] HIERARCHICAL MULTI-TASK LEARNING VIA TASK AFFINITY GROUPINGS
    Srivastava, Siddharth
    Bhugra, Swati
    Kaushik, Vinay
    Lall, Brejesh
    2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023, : 3289 - 3293
  • [35] MVP: Multi-task Supervised Pre-training for Natural Language Generation
    Tang, Tianyi
    Li, Junyi
    Zhao, Wayne Xin
    Wen, Ji-Rong
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023), 2023, : 8758 - 8794
  • [36] The aLS-SVM based multi-task learning classifiers
    Lu, Liyun
    Lin, Qiang
    Pei, Huimin
    Zhong, Ping
    APPLIED INTELLIGENCE, 2018, 48 (08) : 2393 - 2407
  • [37] The aLS-SVM based multi-task learning classifiers
    Liyun Lu
    Qiang Lin
    Huimin Pei
    Ping Zhong
    Applied Intelligence, 2018, 48 : 2393 - 2407
  • [38] Multi-task Attribute Joint Feature Learning
    Chang, Lu
    Fang, Yuchun
    Jiang, Xiaoda
    BIOMETRIC RECOGNITION, CCBR 2015, 2015, 9428 : 193 - 200
  • [39] Joint multi-task cascade for instance segmentation
    Yaole Wen
    Fuyuan Hu
    Jinchang Ren
    Xinru Shang
    Linyan Li
    Xuefeng Xi
    Journal of Real-Time Image Processing, 2020, 17 : 1983 - 1989
  • [40] Multi-task Research and Research Joint Ventures
    La Manna, Manfredi M.
    B E JOURNAL OF THEORETICAL ECONOMICS, 2013, 13 (01): : 59 - 77