Optimization of the Abstract Text Summarization Model Based on Multi-Task Learning

Cited: 0
Authors
Yao, Ben [1 ]
Ding, Gejian [1 ]
Affiliations
[1] Zhejiang Normal Univ, Sch Comp Sci & Technol, Jinhua, Zhejiang, Peoples R China
Keywords
Abstract text summarization; BART; Multi-task learning; Factual consistency
DOI
10.1145/3650400.3650469
CLC number
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
Automatic text summarization is a crucial technology in natural language processing, used to meet the demands of processing large volumes of text by effectively extracting key information and thereby improving task efficiency. With the rise of pretrained language models, abstract text summarization has progressively become mainstream, producing fluent summaries that capture a document's core content. Nonetheless, abstract text summarization inevitably suffers from factual inconsistencies with the original text. This paper introduces a sequence tagging task to achieve multi-task learning for abstract text summarization models. For this sequence tagging task, we designed annotated datasets at both the entity and sentence levels, based on an analysis of the XSum dataset, with the aim of improving the factual consistency of generated summaries. Experimental results demonstrate that the optimized BART model performs favorably on both the ROUGE and FactCC metrics.
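The abstract outlines the approach only at a high level: a BART summarizer trained jointly with an auxiliary sequence-tagging task over factuality annotations. Below is a minimal sketch of that multi-task setup, assuming a Hugging Face BART backbone; it is not the authors' code, and names such as MultiTaskBart, tag_head, num_tags, and tagging_weight are illustrative placeholders, as are the binary tag scheme and the loss weight.

```python
# Minimal multi-task sketch: BART summarization loss plus an auxiliary
# token-tagging loss computed from the encoder states. Illustrative only;
# the paper's actual tag sets come from its entity-/sentence-level
# annotations of XSum.
import torch
import torch.nn as nn
from transformers import BartForConditionalGeneration, BartTokenizer


class MultiTaskBart(nn.Module):
    def __init__(self, model_name="facebook/bart-base", num_tags=2,
                 tagging_weight=0.5):
        super().__init__()
        self.bart = BartForConditionalGeneration.from_pretrained(model_name)
        # Token-level tagging head over encoder hidden states (standing in
        # for the factuality labels the abstract describes).
        self.tag_head = nn.Linear(self.bart.config.d_model, num_tags)
        self.tagging_weight = tagging_weight
        self.tag_loss_fn = nn.CrossEntropyLoss(ignore_index=-100)

    def forward(self, input_ids, attention_mask, labels, tag_labels):
        # Standard seq2seq cross-entropy loss for summary generation.
        out = self.bart(input_ids=input_ids, attention_mask=attention_mask,
                        labels=labels)
        # Encoder states (batch, src_len, d_model) -> per-token tag logits.
        tag_logits = self.tag_head(out.encoder_last_hidden_state)
        tag_loss = self.tag_loss_fn(tag_logits.view(-1, tag_logits.size(-1)),
                                    tag_labels.view(-1))
        # Joint multi-task objective.
        return out.loss + self.tagging_weight * tag_loss


if __name__ == "__main__":
    tok = BartTokenizer.from_pretrained("facebook/bart-base")
    model = MultiTaskBart()
    src = tok(["The company announced record profits on Monday."],
              return_tensors="pt")
    tgt = tok(["Record profits announced."], return_tensors="pt")
    # Hypothetical per-source-token tags (0 = ordinary, 1 = fact-bearing).
    tags = torch.zeros_like(src["input_ids"])
    loss = model(src["input_ids"], src["attention_mask"],
                 tgt["input_ids"], tags)
    loss.backward()
    print(float(loss))
```

One design point worth noting in this sketch: the tagging head reads only the encoder states, so the auxiliary supervision shapes the source-side representation without changing the decoder or the generation interface.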
Pages: 424 - 428
Page count: 5
Related papers
50 records in total
  • [41] Multi-task learning using a hybrid representation for text classification
    Lu, Guangquan
    Gan, Jiangzhang
    Yin, Jian
    Luo, Zhiping
    Li, Bo
    Zhao, Xishun
    NEURAL COMPUTING & APPLICATIONS, 2020, 32 (11) : 6467 - 6480
  • [42] Fact Aware Multi-task Learning for Text Coherence Modeling
    Abhishek, Tushar
    Rawat, Daksh
    Gupta, Manish
    Varma, Vasudeva
    ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PAKDD 2022, PT II, 2022, 13281 : 340 - 353
  • [43] Multi-Task Learning for Text-dependent Speaker Verification
    Chen, Nanxin
    Qian, Yanmin
    Yu, Kai
    16TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2015), VOLS 1-5, 2015 : 185 - 189
  • [44] Model-Protected Multi-Task Learning
    Liang, Jian
    Liu, Ziqi
    Zhou, Jiayu
    Jiang, Xiaoqian
    Zhang, Changshui
    Wang, Fei
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (02) : 1002 - 1019
  • [45] Multi-Task Clustering with Model Relation Learning
    Zhang, Xiaotong
    Zhang, Xianchao
    Liu, Han
    Luo, Jiebo
    PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018 : 3132 - 3140
  • [46] Multi-Task Model and Feature Joint Learning
    Li, Ya
    Tian, Xinmei
    Liu, Tongliang
    Tao, Dacheng
    PROCEEDINGS OF THE TWENTY-FOURTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE (IJCAI), 2015 : 3643 - 3649
  • [47] Multi-Task Multi-View Learning Based on Cooperative Multi-Objective Optimization
    Zhou, Di
    Wang, Jun
    Jiang, Bin
    Guo, Hua
    Li, Yajun
    IEEE ACCESS, 2018, 6 : 19465 - 19477
  • [48] Plausibility-promoting generative adversarial network for abstractive text summarization with multi-task constraint
    Yang, Min
    Wang, Xintong
    Lu, Yao
    Lv, Jianming
    Shen, Ying
    Li, Chengming
    INFORMATION SCIENCES, 2020, 521 : 46 - 61
  • [49] SEBGM: Sentence Embedding Based on Generation Model with multi-task learning
    Wang, Qian
    Zhang, Weiqi
    Lei, Tianyi
    Cao, Yu
    Peng, Dezhong
    Wang, Xu
    COMPUTER SPEECH AND LANGUAGE, 2024, 87
  • [50] Chinese Named Entity Recognition Model Based on Multi-Task Learning
    Fang, Qin
    Li, Yane
    Feng, Hailin
    Ruan, Yaoping
    APPLIED SCIENCES-BASEL, 2023, 13 (08)