NegT5: A Cross-Task Text-to-Text Framework for Negation in Question Answering

Cited: 0
Authors
Jin, Tao [1 ]
Racharak, Teeradaj [1 ]
Minh Le Nguyen [1 ]
Affiliations
[1] Japan Advanced Institute of Science and Technology, School of Information Science, Nomi, Japan
Keywords
Negation; SQuAD2.0; T5; Dual Fine-tuning; Cross-labels
DOI
10.1007/978-981-99-5837-5_23
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Negation is a fundamental grammatical construct that plays a crucial role in question answering (QA) tasks. Prior work has shown that models trained on SQuAD1.1 still return the original answer when presented with negated questions. To mitigate this issue, SQuAD2.0 incorporates a large number of unanswerable questions so that pre-trained models can learn to recognize negated queries. In this study, we evaluate the model on answerable and unanswerable questions that contain negation words and find that its performance on unanswerable negated questions surpasses the baseline, whereas its performance on answerable negated questions falls short of it. This outcome suggests that although SQuAD2.0 includes a substantial number of unanswerable questions, their pattern is largely limited to the insertion of negative adverbs such as "never" and "not"; as a result, the trained model tends to respond "unanswerable" whenever a question contains a negative expression. To address this issue, we propose a novel framework, called NegT5, which adopts the text-to-text multi-task fine-tuning paradigm introduced with T5 to make the model able to handle negation in QA.
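The following is a minimal sketch of the idea, not the authors' released code: it casts SQuAD2.0-style items into T5's text-to-text format and mixes in a hypothetical auxiliary negation-detection task, in the spirit of the multi-task fine-tuning the abstract describes. The task prefixes, the literal "unanswerable" target string, and the auxiliary task formulation are all assumptions for illustration.

    # Sketch only: SQuAD2.0-style QA plus a hypothetical negation task,
    # both cast as text-to-text pairs for one T5 encoder-decoder.
    from typing import Optional, Tuple

    from transformers import T5ForConditionalGeneration, T5TokenizerFast

    MODEL_NAME = "t5-base"  # assumed backbone size
    tokenizer = T5TokenizerFast.from_pretrained(MODEL_NAME)
    model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)

    def qa_pair(question: str, context: str, answer: Optional[str]) -> Tuple[str, str]:
        # Unanswerable items map to a literal "unanswerable" target so the
        # decoder learns to emit it rather than guessing a span.
        source = f"question: {question} context: {context}"
        target = answer if answer is not None else "unanswerable"
        return source, target

    def negation_pair(sentence: str, negated: bool) -> Tuple[str, str]:
        # Hypothetical auxiliary task: label whether a sentence is negated.
        return f"detect negation: {sentence}", "negated" if negated else "affirmative"

    # A mixed-task batch: both tasks share the same encoder-decoder and loss,
    # which is the core of T5-style multi-task fine-tuning.
    pairs = [
        qa_pair("When was T5 released?", "T5 was released in 2019.", "2019"),
        qa_pair("When was T5 never released?", "T5 was released in 2019.", None),
        negation_pair("The model was never evaluated on negated questions.", True),
    ]
    sources, targets = zip(*pairs)

    inputs = tokenizer(list(sources), padding=True, return_tensors="pt")
    labels = tokenizer(list(targets), padding=True, return_tensors="pt").input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # mask padding out of the loss

    loss = model(**inputs, labels=labels).loss  # one loss over both tasks
    loss.backward()

Under the dual fine-tuning setup the keywords hint at, batches of this kind would presumably mix or alternate the QA and negation tasks so that a single set of weights serves both.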
Pages: 272-285 (14 pages)
Related Papers (18 in total)
  • [1] Leveraging Text-to-Text Pretrained Language Models for Question Answering in Chemistry
    Tran, Dan
    Pascazio, Laura
    Akroyd, Jethro
    Mosbach, Sebastian
    Kraft, Markus
    ACS OMEGA, 2024, 9 (12): 13883-13896
  • [2] Enhance Text-to-Text Transfer Transformer with Generated Questions for Thai Question Answering
    Phakmongkol, Puri
    Vateekul, Peerapon
    APPLIED SCIENCES-BASEL, 2021, 11 (21)
  • [3] A Cross-Task Analysis of Text Span Representations
    Toshniwal, Shubham
    Shi, Haoyue
    Shi, Bowen
    Gao, Lingyu
    Livescu, Karen
    Gimpel, Kevin
    5TH WORKSHOP ON REPRESENTATION LEARNING FOR NLP (REPL4NLP-2020), 2020: 166-176
  • [4] FewshotQA: A simple framework for few-shot learning of question answering tasks using pre-trained text-to-text models
    Chada, Rakesh
    Natarajan, Pradeep
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021: 6081-6090
  • [5] Cross-task Knowledge Transfer for Extremely Weakly Supervised Text Classification
    Park, Seongmin
    Kim, Kyungho
    Lee, Jihwa
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, 2023: 5329-5341
  • [6] CKD: Cross-Task Knowledge Distillation for Text-to-Image Synthesis
    Yuan, Mingkuan
    Peng, Yuxin
    IEEE TRANSACTIONS ON MULTIMEDIA, 2020, 22 (08): 1955-1968
  • [7] Improving Cross-task Generalization of Unified Table-to-text Models with Compositional Task Configurations
    Chen, Jifan
    Zhang, Yuhao
    Liu, Lan
    Dong, Rui
    Chen, Xinchi
    Ng, Patrick
    Wang, William Yang
    Huang, Zhiheng
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, 2023: 5523-5539
  • [8] Ensemble-NQG-T5: Ensemble Neural Question Generation Model Based on Text-to-Text Transfer Transformer
    Hwang, Myeong-Ha
    Shin, Jikang
    Seo, Hojin
    Im, Jeong-Seon
    Cho, Hee
    Lee, Chun-Kwon
    APPLIED SCIENCES-BASEL, 2023, 13 (02)
  • [9] A Coarse-to-Fine Text Matching Framework for Customer Service Question Answering
    Li, Ang
    Liang, Xingwei
    Zhang, Miao
    Wang, Bingbing
    Chen, Guanrong
    Gao, Jun
    Lin, Qihui
    Xu, Ruifeng
    COGNITIVE COMPUTING, ICCC 2022, 2022, 13734: 39-53
  • [10] Transductive Cross-Lingual Scene-Text Visual Question Answering
    Li, Lin
    Zhang, Haohan
    Fang, Zeqin
    Xie, Zhongwei
    Liu, Jianquan
    NEURAL INFORMATION PROCESSING, ICONIP 2023, PT VI, 2024, 14452: 452-467