RDMTL: Reverse dictionary model based on multitask learning

Cited: 0
Authors
Tian, Sicheng [1 ]
Huang, Shaobin [1 ]
Li, Rongsheng [1 ]
Wei, Chi [1 ]
Liu, Ye [1 ]
Affiliations
[1] Harbin Engn Univ, Coll Comp Sci & Technol, Harbin 150001, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Reverse dictionary; POS tag; Example sentence; Multitask learning;
DOI
10.1016/j.knosys.2024.111869
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
The reverse dictionary task is to find appropriate words for given definitions. Existing studies struggle to capture the subtle differences between words with similar definitions. Motivated by humans' ability to verify their understanding of a word's semantics by composing example sentences and recognizing parts of speech, we propose a reverse dictionary model based on multitask learning (RDMTL) that alleviates this problem. RDMTL comprises a primary task component that extracts different levels of semantic features from definitions and two auxiliary task components that predict part-of-speech tags and generate example sentences for target words. By jointly learning these three tasks, RDMTL deepens its understanding of definitions and distinguishes subtle differences among words with similar meanings; it also generates more accurate and natural example sentences. We evaluate RDMTL on a modified version of the New Oxford dataset and compare its performance with several baseline models. The experimental results show that RDMTL improves the rank metric for the reverse dictionary task by 1.02%, the F1 score for the part-of-speech classification task by 20.47%, and the SB-4 metric for the sentence generation task by 14.7%. In addition, we conducted an ablation study to analyze the contribution of each component and the impact of multitask learning. This study introduces a new method and perspective for the reverse dictionary task.
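To make the described architecture concrete, the following is a minimal, hypothetical sketch of the multitask setup outlined in the abstract: a shared definition encoder feeding (1) a word-embedding head for the reverse dictionary task, (2) a part-of-speech classifier, and (3) a decoder that generates an example sentence, trained with a weighted joint loss. All module names, dimensions, and loss weights are illustrative assumptions, not the paper's actual implementation.

# Hypothetical sketch of a shared-encoder multitask model (not the authors' code).
import torch
import torch.nn as nn

class MultitaskReverseDictionary(nn.Module):
    def __init__(self, vocab_size, num_pos_tags, emb_dim=300, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Shared encoder over the definition tokens.
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # Primary task: map the definition to the target word's embedding.
        self.word_head = nn.Linear(2 * hidden_dim, emb_dim)
        # Auxiliary task 1: predict the target word's POS tag.
        self.pos_head = nn.Linear(2 * hidden_dim, num_pos_tags)
        # Auxiliary task 2: decode an example sentence token by token.
        self.decoder = nn.LSTM(emb_dim, 2 * hidden_dim, batch_first=True)
        self.gen_head = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, definition_ids, sentence_ids):
        enc_out, _ = self.encoder(self.embed(definition_ids))
        context = enc_out.mean(dim=1)              # pooled definition vector
        word_vec = self.word_head(context)         # reverse-dictionary output
        pos_logits = self.pos_head(context)        # POS classification output
        # Condition the sentence decoder on the pooled definition vector.
        h0 = context.unsqueeze(0)
        c0 = torch.zeros_like(h0)
        dec_out, _ = self.decoder(self.embed(sentence_ids), (h0, c0))
        gen_logits = self.gen_head(dec_out)        # example-sentence output
        return word_vec, pos_logits, gen_logits

def joint_loss(word_vec, pos_logits, gen_logits,
               target_emb, pos_labels, sentence_targets,
               w_rd=1.0, w_pos=0.3, w_gen=0.3):
    # Weighted sum of the three task losses (weights are assumed, not reported).
    l_rd = nn.functional.mse_loss(word_vec, target_emb)
    l_pos = nn.functional.cross_entropy(pos_logits, pos_labels)
    l_gen = nn.functional.cross_entropy(
        gen_logits.reshape(-1, gen_logits.size(-1)),
        sentence_targets.reshape(-1))
    return w_rd * l_rd + w_pos * l_pos + w_gen * l_gen

In this sketch, gradients from the POS and sentence-generation heads flow back into the shared encoder, which is how the auxiliary tasks can sharpen the definition representation used by the primary reverse dictionary head.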
Pages: 13