Exploring Abstractive Text Summarization: Methods, Dataset, Evaluation, and Emerging Challenges

Cited by: 0
Authors
Sunusi, Yusuf [1 ]
Omar, Nazlia [1 ]
Zakaria, Lailatul Qadri [1 ]
Affiliations
[1] Univ Kebangsaan Malaysia, Ctr Artificial Intelligence Technol, Bangi 43600, Malaysia
Keywords
Abstractive text summarization; systematic literature review; natural language processing; evaluation metrics; dataset; computational linguistics; MODEL; RNN;
DOI
10.14569/IJACSA.2024.01507130
CLC Number
TP301 [Theory, Methods];
Subject Classification Code
081202 ;
Abstract
The latest advanced models for abstractive summarization, which utilize encoder-decoder frameworks, produce exactly one summary for each source text. This systematic literature review (SLR) comprehensively examines the recent advancements in abstractive text summarization (ATS), a pivotal area in natural language processing (NLP) that aims to generate concise and coherent summaries from extensive text sources. We delve into the evolution of ATS, focusing on key aspects such as encoder-decoder architectures, innovative mechanisms like attention and pointer-generator models, training and optimization methods, datasets, and evaluation metrics. Our review analyzes a wide range of studies, highlighting the transition from traditional sequence-to-sequence models to more advanced approaches like Transformer-based architectures. We explore the integration of mechanisms such as attention, which enhances model interpretability and effectiveness, and pointer-generator networks, which adeptly balance between copying and generating text. The review also addresses the challenges in training these models, including issues related to dataset quality and diversity, particularly in low-resource languages. A critical analysis of evaluation metrics reveals a heavy reliance on ROUGE scores, prompting a discussion on the need for more nuanced evaluation methods that align closely with human judgment. Additionally, we identify and discuss emerging research gaps, such as the need for effective summary length control and the handling of model hallucination, which are crucial for the practical application of ATS. This SLR not only synthesizes current research trends and methodologies in ATS, but also provides insights into future directions, underscoring the importance of continuous innovation in model development, dataset enhancement, and evaluation strategies.
Our findings aim to guide researchers and practitioners in navigating the evolving landscape of abstractive text summarization and in identifying areas ripe for future exploration and development.
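The ROUGE reliance the abstract critiques can be made concrete with a minimal sketch. The helper `rouge1_f1` below is hypothetical (not from the paper): it computes ROUGE-1 F1 as clipped unigram overlap between a reference and a candidate summary, whereas production evaluations use the official ROUGE toolkit with stemming, stopword handling, and the ROUGE-2/ROUGE-L variants.

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    # Clipped overlap: each candidate token counts at most as often
    # as it appears in the reference.
    overlap = sum(min(n, ref_counts[w]) for w, n in cand_counts.items())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)
```

A purely lexical score like this can rate a fluent, factually wrong summary highly if it reuses the reference's words, which is one reason the review calls for evaluation methods better aligned with human judgment.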
Pages: 1340 - 1357 (18 pages)
Related Papers (50 total)
  • [21] HindiSumm: A Hindi Abstractive Summarization Benchmark Dataset
    Singh, Geetanjali
    Mittal, Namita
    Chouhan, Satyendra Singh
    ACM TRANSACTIONS ON ASIAN AND LOW-RESOURCE LANGUAGE INFORMATION PROCESSING, 2024, 23 (12)
  • [22] Inducing Causal Structure for Abstractive Text Summarization
    Chen, Lu
    Zhang, Ruqing
    Huang, Wei
    Chen, Wei
    Guo, Jiafeng
    Cheng, Xueqi
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 213 - 223
  • [23] Abstractive Text Summarization on Google Search Results
    Patel, Dikshita
    Shah, Nisarg
    Shah, Vrushali
    Hole, Varsha
    PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON INTELLIGENT COMPUTING AND CONTROL SYSTEMS (ICICCS 2020), 2020, : 538 - 543
  • [24] Unsupervised Abstractive Summarization of Bengali Text Documents
    Chowdhury, Radia Rayan
    Nayeem, Mir Tafseer
    Mim, Tahsin Tasnim
    Chowdhury, Md Saifur Rahman
    Jannat, Taufiqul
    16TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EACL 2021), 2021, : 2612 - 2619
  • [25] Japanese abstractive text summarization using BERT
    Iwasaki, Yuuki
    Yamashita, Akihiro
    Konno, Yoko
    Matsubayashi, Katsushi
    2019 INTERNATIONAL CONFERENCE ON TECHNOLOGIES AND APPLICATIONS OF ARTIFICIAL INTELLIGENCE (TAAI), 2019,
  • [26] RUATS: Abstractive Text Summarization for Roman Urdu
    Kaleem, Laraib
    Rahman, Arif Ur
    Moetesum, Momina
    DOCUMENT ANALYSIS SYSTEMS, DAS 2024, 2024, 14994 : 258 - 273
  • [27] Variational Neural Decoder for Abstractive Text Summarization
    Zhao, Huan
    Cao, Jie
    Xu, Mingquan
    Lu, Jian
    COMPUTER SCIENCE AND INFORMATION SYSTEMS, 2020, 17 (02) : 537 - 552
  • [28] Reinforcement Learning Models for Abstractive Text Summarization
    Buciumas, Sergiu
    PROCEEDINGS OF THE 2019 ANNUAL ACM SOUTHEAST CONFERENCE (ACMSE 2019), 2019, : 270 - 271
  • [29] Abstractive Text Summarization Using Multimodal Information
    Rafi, Shaik
    Das, Ranjita
    2023 10TH INTERNATIONAL CONFERENCE ON SOFT COMPUTING & MACHINE INTELLIGENCE, ISCMI, 2023, : 141 - 145
  • [30] Abstractive Text Summarization via Stacked LSTM
    Siddhartha, Ireddy
    Zhan, Huixin
    Sheng, Victor S.
    2021 INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE AND COMPUTATIONAL INTELLIGENCE (CSCI 2021), 2021, : 437 - 442