


A Comprehensive Survey of Abstractive Text Summarization Based on Deep Learning.

Affiliations

State Key Laboratory of Mathematical Engineering and Advanced Computing, Zhengzhou, China.

Guilin University of Electronic Technology, Guilin, China.

Publication Information

Comput Intell Neurosci. 2022 Aug 1;2022:7132226. doi: 10.1155/2022/7132226. eCollection 2022.

DOI: 10.1155/2022/7132226
PMID: 35958768
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9359827/
Abstract

With the rapid development of the Internet, the massive amount of web textual data has grown exponentially, which has brought considerable challenges to downstream tasks, such as document management, text classification, and information retrieval. Automatic text summarization (ATS) is becoming an extremely important means to solve this problem. The core of ATS is to mine the gist of the original text and automatically generate a concise and readable summary. Recently, to better balance and develop these two aspects, deep learning (DL)-based abstractive summarization models have been developed. At present, for ATS tasks, almost all state-of-the-art (SOTA) models are based on DL architecture. However, a comprehensive literature survey is still lacking in the field of DL-based abstractive text summarization. To fill this gap, this paper provides researchers with a comprehensive survey of DL-based abstractive summarization. We first give an overview of abstractive summarization and DL. Then, we summarize several typical frameworks of abstractive summarization. After that, we also give a comparison of several popular datasets that are commonly used for training, validation, and testing. We further analyze the performance of several typical abstractive summarization systems on common datasets. Finally, we highlight some open challenges in the abstractive summarization task and outline some future research trends. We hope that these explorations will provide researchers with new insights into DL-based abstractive summarization.
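The abstract contrasts DL-based abstractive models, which generate new sentences, with the simpler idea of "mining the gist" of a text. As a point of contrast only, the sketch below shows a minimal frequency-based *extractive* baseline in pure Python; it copies high-scoring sentences verbatim rather than generating text, and it is an illustrative toy, not one of the methods covered by the survey:

```python
import re
from collections import Counter

def extractive_summary(text: str, n_sentences: int = 1) -> str:
    """Keep the sentences whose words are most frequent in the document.

    A frequency-based extractive baseline: unlike the DL-based abstractive
    models the survey covers, it copies sentences verbatim instead of
    generating new text.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> float:
        # Average document-level frequency of the sentence's words.
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    # Emit the chosen sentences in their original order.
    return " ".join(s for s in sentences if s in top)

doc = ("Automatic text summarization condenses a document. "
       "Extractive methods copy sentences from the document. "
       "Abstractive methods generate new sentences. "
       "The weather was pleasant that day.")

print(extractive_summary(doc, n_sentences=2))
# → Extractive methods copy sentences from the document. Abstractive methods generate new sentences.
```

Off-topic sentences score low because their words rarely recur in the document, which is the same "gist-mining" intuition that abstractive models learn end to end.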


[14 figure images from the article (CIN2022-7132226.001–014) are available via PMC9359827.]

Similar Articles

1. A Comprehensive Survey of Abstractive Text Summarization Based on Deep Learning.
Comput Intell Neurosci. 2022 Aug 1;2022:7132226. doi: 10.1155/2022/7132226. eCollection 2022.
2. Abstractive text summarization of low-resourced languages using deep learning.
PeerJ Comput Sci. 2023 Jan 13;9:e1176. doi: 10.7717/peerj-cs.1176. eCollection 2023.
3. A data package for abstractive opinion summarization, title generation, and rating-based sentiment prediction for airline reviews.
Data Brief. 2023 Sep 1;50:109535. doi: 10.1016/j.dib.2023.109535. eCollection 2023 Oct.
4. Abstractive Arabic Text Summarization Based on Deep Learning.
Comput Intell Neurosci. 2022 Jan 11;2022:1566890. doi: 10.1155/2022/1566890. eCollection 2022.
5. Graph-based abstractive biomedical text summarization.
J Biomed Inform. 2022 Aug;132:104099. doi: 10.1016/j.jbi.2022.104099. Epub 2022 Jun 11.
6. UGDAS: Unsupervised graph-network based denoiser for abstractive summarization in biomedical domain.
Methods. 2022 Jul;203:160-166. doi: 10.1016/j.ymeth.2022.03.012. Epub 2022 Apr 2.
7. Flight of the PEGASUS? Comparing Transformers on Few-Shot and Zero-Shot Multi-document Abstractive Summarization.
Proc Int Conf Comput Ling. 2020 Dec;2020:5640-5646.
8. Hierarchical Human-Like Deep Neural Networks for Abstractive Text Summarization.
IEEE Trans Neural Netw Learn Syst. 2021 Jun;32(6):2744-2757. doi: 10.1109/TNNLS.2020.3008037. Epub 2021 Jun 2.
9. Qualitative Analysis of Text Summarization Techniques and Its Applications in Health Domain.
Comput Intell Neurosci. 2022 Feb 9;2022:3411881. doi: 10.1155/2022/3411881. eCollection 2022.
10. Exploiting Intersentence Information for Better Question-Driven Abstractive Summarization: Algorithm Development and Validation.
JMIR Med Inform. 2022 Aug 15;10(8):e38052. doi: 10.2196/38052.

Cited By

1. Generative AI Models in Time-Varying Biomedical Data: Scoping Review.
J Med Internet Res. 2025 Mar 10;27:e59792. doi: 10.2196/59792.
2. Brain-model neural similarity reveals abstractive summarization performance.
Sci Rep. 2025 Jan 2;15(1):370. doi: 10.1038/s41598-024-84530-w.
3. Information Capsule: A New Approach for Summarizing Medical Information.

References

1. MTQA: Text-Based Multitype Question and Answer Reading Comprehension Model.
Comput Intell Neurosci. 2021 Feb 18;2021:8810366. doi: 10.1155/2021/8810366. eCollection 2021.
2. Text Semantic Classification of Long Discourses Based on Neural Networks with Improved Focal Loss.
Comput Intell Neurosci. 2021 Jan 7;2021:8845362. doi: 10.1155/2021/8845362. eCollection 2021.
3. Interactive Dual Attention Network for Text Sentiment Classification.
Comput Intell Neurosci. 2020 Nov 3;2020:8858717. doi: 10.1155/2020/8858717. eCollection 2020.
4. SVD-CNN: A Convolutional Neural Network Model with Orthogonal Constraints Based on SVD for Context-Aware Citation Recommendation.
Comput Intell Neurosci. 2020 Oct 22;2020:5343214. doi: 10.1155/2020/5343214. eCollection 2020.
5. A Compressive Sensing Model for Speeding Up Text Classification.
Comput Intell Neurosci. 2020 Aug 7;2020:8879795. doi: 10.1155/2020/8879795. eCollection 2020.
6. Movie Review Summarization Using Supervised Learning and Graph-Based Ranking Algorithm.
Comput Intell Neurosci. 2020 Jun 2;2020:7526580. doi: 10.1155/2020/7526580. eCollection 2020.
7. Rationale-Augmented Convolutional Neural Networks for Text Classification.
Proc Conf Empir Methods Nat Lang Process. 2016 Nov;2016:795-804. doi: 10.18653/v1/d16-1076.