Exploring Abstractive Text Summarization: Methods, Dataset, Evaluation, and Emerging Challenges

Cited by: 0
Authors
Sunusi, Yusuf [1 ]
Omar, Nazlia [1 ]
Zakaria, Lailatul Qadri [1 ]
Affiliations
[1] Univ Kebangsaan Malaysia, Ctr Artificial Intelligence Technol, Bangi 43600, Malaysia
Keywords
Abstractive text summarization; systematic literature review; natural language processing; evaluation metrics; dataset; computational linguistics; MODEL; RNN
DOI
10.14569/IJACSA.2024.01507130
CLC number
TP301 [Theory and Methods]
Subject classification code
081202
Abstract
The latest state-of-the-art models for abstractive summarization, which utilize encoder-decoder frameworks, produce exactly one summary for each source text. This systematic literature review (SLR) comprehensively examines recent advances in abstractive text summarization (ATS), a pivotal area of natural language processing (NLP) that aims to generate concise, coherent summaries from extensive text sources. We trace the evolution of ATS, focusing on key aspects such as encoder-decoder architectures, innovative mechanisms like attention and pointer-generator models, training and optimization methods, datasets, and evaluation metrics. Our review analyzes a wide range of studies, highlighting the transition from traditional sequence-to-sequence models to more advanced approaches such as Transformer-based architectures. We examine the integration of attention, which enhances model interpretability and effectiveness, and of pointer-generator networks, which balance copying from the source against generating novel text. The review also addresses the challenges of training these models, including dataset quality and diversity, particularly for low-resource languages. A critical analysis of evaluation metrics reveals a heavy reliance on ROUGE scores, prompting a discussion of the need for more nuanced evaluation methods that align closely with human judgment. Additionally, we identify and discuss emerging research gaps, such as effective summary length control and the handling of model hallucination, both crucial for the practical application of ATS. This SLR not only synthesizes current research trends and methodologies in ATS but also provides insights into future directions, underscoring the importance of continuous innovation in model development, dataset enhancement, and evaluation strategies. Our findings aim to guide researchers and practitioners in navigating the evolving landscape of abstractive text summarization and in identifying areas ripe for future exploration and development.
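To make the pointer-generator mechanism mentioned in the abstract concrete, the following is a minimal NumPy sketch of the copy/generate mixture popularized by See et al. (2017): the final word distribution is p_gen * P_vocab plus (1 - p_gen) times the attention mass scattered onto the vocabulary ids of the source tokens. All names and the toy data below are illustrative assumptions, not code from the reviewed papers.

import numpy as np

def pointer_generator_distribution(p_vocab, attention, src_ids, p_gen):
    # Blend the decoder's generation distribution with a copy
    # distribution induced by attention over the source tokens:
    # P(w) = p_gen * P_vocab(w) + (1 - p_gen) * (attention mass on w).
    final = p_gen * p_vocab
    # Scatter-add the copy mass onto the source tokens' vocabulary ids.
    np.add.at(final, src_ids, (1.0 - p_gen) * attention)
    return final

# Toy example: a 10-word vocabulary and a 4-token source sentence.
rng = np.random.default_rng(0)
p_vocab = rng.dirichlet(np.ones(10))   # decoder softmax output
attention = rng.dirichlet(np.ones(4))  # attention over source positions
src_ids = np.array([2, 5, 5, 7])       # vocabulary ids of source tokens
p_final = pointer_generator_distribution(p_vocab, attention, src_ids, p_gen=0.6)
assert np.isclose(p_final.sum(), 1.0)  # the mixture is still a distribution

Because the copy term places probability mass directly on source-token ids, the model can reproduce rare or out-of-vocabulary words verbatim, while the p_gen gate decides how much to rely on free generation.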
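The abstract's point about heavy reliance on ROUGE is easy to see in practice. As a sketch, this snippet scores a toy candidate summary with Google's open-source rouge-score package (one common choice, assumed here for illustration; the review does not prescribe a specific implementation, and the example strings are invented):

# pip install rouge-score
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "the cat sat quietly on the old mat"
candidate = "a cat was sitting on the mat"
scores = scorer.score(reference, candidate)  # target first, prediction second
for name, s in scores.items():
    print(f"{name}: P={s.precision:.3f} R={s.recall:.3f} F1={s.fmeasure:.3f}")

Such n-gram overlap rewards lexical matches rather than meaning, which is precisely why the review argues for evaluation methods that align more closely with human judgment.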
Pages: 1340-1357
Page count: 18
Related papers
50 in total (items [41]-[50] shown)
  • [41] Dugar, Abhinav; Singh, Gaurav; Navyasree, B.; Kumar, Anand M. Unsupervised Abstractive Text Summarization with Length Controlled Autoencoder. 2022 IEEE 19th India Council International Conference (INDICON), 2022.
  • [42] Wu, S.; Huang, D.; Li, J. Abstractive Text Summarization Based on Semantic Alignment Network. Beijing Daxue Xuebao (Ziran Kexue Ban) / Acta Scientiarum Naturalium Universitatis Pekinensis, 2021, 57(1): 1-6.
  • [43] Huang, Ying; Li, Zhixin; Chen, Zhenbin; Zhang, Canlong; Ma, Huifang. Sentence Salience Contrastive Learning for Abstractive Text Summarization. Neurocomputing, 2024, 593.
  • [44] Roul, Rajendra Kumar; Joshi, Pratik Madhav; Sahoo, Jajati Keshari. Abstractive Text Summarization Using Enhanced Attention Model. Intelligent Human Computer Interaction (IHCI 2019), 2020, 11886: 63-76.
  • [45] Gangundi, R.; Sridhar, R. IWM-LSTM Encoder for Abstractive Text Summarization. Multimedia Tools and Applications, 2025, 84(9): 5883-5904.
  • [46] Wazery, Y. M.; Saleh, Marwa E.; Alharbi, Abdullah; Ali, Abdelmgeid A. Abstractive Arabic Text Summarization Based on Deep Learning. Computational Intelligence and Neuroscience, 2022, 2022.
  • [47] Ay, Betul; Ertam, Fatih; Fidan, Guven; Aydin, Galip. Turkish Abstractive Text Document Summarization Using Text to Text Transfer Transformer. Alexandria Engineering Journal, 2023, 68: 1-13.
  • [48] Sharma, Eva; Li, Chen; Wang, Lu. BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization. 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), 2019: 2204-2213.
  • [49] Dong, Yue; Wang, Shuohang; Gan, Zhe; Cheng, Yu; Cheung, Jackie Chi Kit; Liu, Jingjing. Multi-Fact Correction in Abstractive Text Summarization. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020: 9320-9331.
  • [50] Liu, Shuaiqi; Cao, Jiannong; Deng, Zhongfen; Zhao, Wenting; Yang, Ruosong; Wen, Zhiyuan; Yu, Philip S. Neural Abstractive Summarization for Long Text and Multiple Tables. IEEE Transactions on Knowledge and Data Engineering, 2024, 36(6): 2572-2586.