Multi-task learning approach for utilizing temporal relations in natural language understanding tasks

Cited: 0
Authors
Lim, Chae-Gyun [1 ]
Jeong, Young-Seob [2 ]
Choi, Ho-Jin [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Sch Comp, Daejeon 34141, South Korea
[2] Chungbuk Natl Univ, Dept Comp Engn, Cheongju 28644, South Korea
DOI: 10.1038/s41598-023-35009-7
Chinese Library Classification: O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Discipline Classification: 07; 0710; 09
Abstract
Various studies have been conducted on multi-task learning techniques in natural language understanding (NLU), which build a single model capable of handling multiple tasks with generalized performance. Most documents written in natural language contain time-related information, and it is essential to recognize such information accurately and use it to understand the context and overall content of a document when performing NLU tasks. In this study, we propose a multi-task learning technique that adds a temporal relation extraction task to the training process of NLU tasks, so that the trained model can exploit temporal context information from the input sentences. To take advantage of the characteristics of multi-task learning, an auxiliary task that extracts temporal relations from given sentences was designed, and the multi-task model was configured to train it jointly with existing NLU tasks on Korean and English datasets. Performance differences were analyzed across combinations of NLU tasks with temporal relation extraction. The accuracy of temporal relation extraction as a single task is 57.8% for Korean and 45.1% for English, and improves to up to 64.2% and 48.7%, respectively, when combined with other NLU tasks. The experimental results confirm that temporal relation extraction performs better when trained jointly with other NLU tasks in multi-task learning than when trained alone. Moreover, owing to the linguistic differences between Korean and English, the task combinations that benefit temporal relation extraction differ between the two languages.
Pages: 10
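
As a rough illustration of the joint training scheme described in the abstract above, the following is a minimal PyTorch sketch of hard-parameter-sharing multi-task learning: a shared sentence encoder whose parameters receive gradients from every task, plus one classification head per task. The encoder choice (a single-layer LSTM), the vocabulary and hidden sizes, and the task label sets (a hypothetical "temporal_relation" task with 4 relation classes and an "nli" task with 3 classes) are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    """Hard-parameter-sharing multi-task model: one shared sentence
    encoder with a separate classification head per task."""

    def __init__(self, vocab_size, hidden_dim, task_num_labels):
        super().__init__()
        # Shared layers, updated by gradients from every task.
        self.embedding = nn.Embedding(vocab_size, hidden_dim)
        self.encoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        # One task-specific output head per task name.
        self.heads = nn.ModuleDict({
            task: nn.Linear(hidden_dim, n)
            for task, n in task_num_labels.items()
        })

    def forward(self, token_ids, task):
        x = self.embedding(token_ids)        # (batch, seq, hidden)
        _, (h, _) = self.encoder(x)          # h: (1, batch, hidden)
        return self.heads[task](h[-1])       # task-specific logits

# Hypothetical task setup: 4 temporal relation classes, 3 NLI classes.
model = MultiTaskModel(vocab_size=30000, hidden_dim=256,
                       task_num_labels={"temporal_relation": 4, "nli": 3})
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(token_ids, labels, task):
    # Alternating batches across tasks lets the auxiliary
    # temporal-relation task shape the shared encoder.
    optimizer.zero_grad()
    loss = loss_fn(model(token_ids, task), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batches: 8 sentences of 20 token ids each.
tokens = torch.randint(0, 30000, (8, 20))
train_step(tokens, torch.randint(0, 4, (8,)), "temporal_relation")
train_step(tokens, torch.randint(0, 3, (8,)), "nli")

Alternating batches across tasks is one common scheduling strategy for multi-task training; the abstract does not state which scheduling or loss-weighting scheme the authors actually used.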