Abstractive text summarization: State of the art, challenges, and improvements

Cited by: 3
Authors
Shakil, Hassan [1 ]
Farooq, Ahmad [2 ]
Kalita, Jugal [1 ]
Affiliations
[1] Univ Colorado, Dept Comp Sci, Colorado Springs, CO 80918 USA
[2] Univ Arkansas, Dept Elect & Comp Engn, Little Rock, AR 72204 USA
Funding
U.S. National Science Foundation;
Keywords
Automatic summarization; Abstractive summarization; Extractive summarization; Knowledge representation; Text generation; KNOWLEDGE;
DOI
10.1016/j.neucom.2024.128255
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Focusing on the landscape of abstractive text summarization, as opposed to extractive techniques, this survey presents a comprehensive overview of state-of-the-art techniques, prevailing challenges, and prospective research directions. We categorize the techniques into traditional sequence-to-sequence models, pre-trained large language models, reinforcement learning, hierarchical methods, and multi-modal summarization. Unlike prior works that did not examine complexity, scalability, and comparisons of techniques in detail, this review takes a comprehensive approach encompassing state-of-the-art methods, challenges, solutions, comparisons, and limitations, and charts future improvements, providing researchers with an extensive overview to advance abstractive summarization research. We provide comparison tables across the categorized techniques, offering insights into model complexity, scalability, and appropriate applications. The paper highlights challenges such as inadequate meaning representation, factual consistency, controllable text summarization, cross-lingual summarization, and evaluation metrics, among others. Solutions leveraging knowledge incorporation and other innovative strategies are proposed to address these challenges. The paper concludes by highlighting emerging research areas such as factual inconsistency, domain-specific, cross-lingual, multilingual, and long-document summarization, as well as handling noisy data. Our objective is to provide researchers and practitioners with a structured overview of the domain, enabling them to better understand the current landscape and identify potential areas for further research and improvement.
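As a concrete illustration of one technique family the survey categorizes, pre-trained sequence-to-sequence models, the following is a minimal sketch of abstractive summarization in Python. It assumes the Hugging Face transformers library and the facebook/bart-large-cnn checkpoint; both are illustrative choices, not models or APIs prescribed by the paper.

    # Minimal sketch of abstractive summarization with a pre-trained
    # sequence-to-sequence model. The transformers library and the
    # facebook/bart-large-cnn checkpoint are illustrative assumptions,
    # not choices made by the survey.
    from transformers import pipeline

    # Load a pre-trained abstractive summarizer (downloads the checkpoint on first use).
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

    document = (
        "Abstractive summarization systems generate new sentences that paraphrase "
        "the source document, whereas extractive systems select and concatenate "
        "sentences that already appear in it. Pre-trained encoder-decoder models "
        "are commonly fine-tuned on summarization corpora for this task."
    )

    # Length limits below are illustrative generation knobs, not values from the paper.
    result = summarizer(document, max_length=40, min_length=10, do_sample=False)
    print(result[0]["summary_text"])

Running the sketch prints a short generated paraphrase of the input rather than a copied sentence, which is the operational difference between abstractive and extractive summarization that the survey builds on.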
Pages: 28
Related Papers
50 records in total
  • [31] Sentence salience contrastive learning for abstractive text summarization
    Huang, Ying
    Li, Zhixin
    Chen, Zhenbin
    Zhang, Canlong
    Ma, Huifang
    NEUROCOMPUTING, 2024, 593
  • [32] Abstractive Text Summarization Using Enhanced Attention Model
    Roul, Rajendra Kumar
    Joshi, Pratik Madhav
    Sahoo, Jajati Keshari
    INTELLIGENT HUMAN COMPUTER INTERACTION (IHCI 2019), 2020, 11886 : 63 - 76
  • [33] IWM-LSTM encoder for abstractive text summarization
    Gangundi, R.
    Sridhar, R.
    MULTIMEDIA TOOLS AND APPLICATIONS, 2025, 84 (09) : 5883 - 5904
  • [34] Abstractive Arabic Text Summarization Based on Deep Learning
    Wazery, Y. M.
    Saleh, Marwa E.
    Alharbi, Abdullah
    Ali, Abdelmgeid A.
    COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE, 2022, 2022
  • [35] Turkish abstractive text document summarization using text to text transfer transformer
    Ay, Betul
    Ertam, Fatih
    Fidan, Guven
    Aydin, Galip
    ALEXANDRIA ENGINEERING JOURNAL, 2023, 68 : 1 - 13
  • [36] Multi-Fact Correction in Abstractive Text Summarization
    Dong, Yue
    Wang, Shuohang
    Gan, Zhe
    Cheng, Yu
    Cheung, Jackie Chi Kit
    Liu, Jingjing
    PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 9320 - 9331
  • [37] Neural Abstractive Summarization for Long Text and Multiple Tables
    Liu, Shuaiqi
    Cao, Jiannong
    Deng, Zhongfen
    Zhao, Wenting
    Yang, Ruosong
    Wen, Zhiyuan
    Yu, Philip S.
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (06) : 2572 - 2586
  • [38] A Novel Framework for Semantic Oriented Abstractive Text Summarization
    Moratanch, N.
    Chitrakala, S.
    JOURNAL OF WEB ENGINEERING, 2018, 17 (08): : 675 - 716
  • [39] Keyword-Aware Encoder for Abstractive Text Summarization
    Hu, Tianxiang
    Liang, Jingxi
    Ye, Wei
    Zhang, Shikun
    DATABASE SYSTEMS FOR ADVANCED APPLICATIONS (DASFAA 2021), PT II, 2021, 12682 : 37 - 52
  • [40] Abstractive Text Summarization Using the BRIO Training Paradigm
    Khang Nhut Lam
    Thieu Gia Doan
    Khang Thua Pham
    Kalita, Jugal
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, 2023, : 92 - 99