Suppr 超能文献


Advanced multiple document summarization iterative recursive transformer networks and multimodal transformer.

Author information

Ketineni Sunilkumar, Jayachandran Sheela

Affiliation

SCOPE, VIT-AP University, Amaravathi, Andhra Pradesh, India.

Publication information

PeerJ Comput Sci. 2024 Dec 9;10:e2463. doi: 10.7717/peerj-cs.2463. eCollection 2024.

DOI: 10.7717/peerj-cs.2463
PMID: 39896414
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11784779/
Abstract

The proliferation of digital information necessitates advanced techniques for multiple document summarization, capable of distilling vast textual data efficiently. Traditional approaches often struggle with coherence, integration of multimodal data, and suboptimal learning strategies. To address these challenges, this work introduces novel neural architectures and methodologies. At its core is recursive transformer networks (ReTran), merging recursive neural networks with transformer architectures for superior comprehension of textual dependencies, projecting a 5-10% improvement in ROUGE scores. Cross-modal summarization employs a multimodal transformer with cross-modal attention, amalgamating text, images, and metadata for more holistic summaries, expecting an 8-12% enhancement in quality metrics. Actor-critic reinforcement learning refines training by optimizing summary quality, surpassing Q-learning-based strategies by 5-8%. Meta-learning for zero-shot summarization addresses summarizing unseen domains, projecting a 6-10% uptick in performance. Knowledge-enhanced transformer integrates external knowledge for improved semantic coherence, potentially boosting ROUGE scores by 7-12%. These advancements not only improve numerical performance but also produce more informative and coherent summaries across diverse domains and modalities. This work represents a significant stride in multiple document summarization, setting a new benchmark for future research and applications.
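The cross-modal attention the abstract describes can be illustrated with a minimal sketch: text-token queries attend over image-region keys and values via single-head scaled dot-product attention, yielding image-informed text representations. This is not the authors' implementation; all names, dimensions, and the random projections are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(text, image, d_k=64, seed=0):
    """Single-head scaled dot-product attention where text tokens
    (queries) attend over image regions (keys/values)."""
    rng = np.random.default_rng(seed)
    d_text, d_img = text.shape[-1], image.shape[-1]
    # Random projection matrices stand in for learned weights.
    W_q = rng.standard_normal((d_text, d_k)) / np.sqrt(d_text)
    W_k = rng.standard_normal((d_img, d_k)) / np.sqrt(d_img)
    W_v = rng.standard_normal((d_img, d_k)) / np.sqrt(d_img)
    Q = text @ W_q                       # (n_tokens, d_k)
    K = image @ W_k                      # (n_regions, d_k)
    V = image @ W_v                      # (n_regions, d_k)
    scores = Q @ K.T / np.sqrt(d_k)      # (n_tokens, n_regions)
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # image-informed text features

# Toy inputs: 5 text tokens (dim 32), 7 image regions (dim 48).
text = np.random.default_rng(1).standard_normal((5, 32))
image = np.random.default_rng(2).standard_normal((7, 48))
out = cross_modal_attention(text, image)
print(out.shape)  # (5, 64)
```

In a full multimodal transformer the projections would be learned and the attended features fused back into the text stream; the sketch only shows the attention step itself.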

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ced6/11784779/bdf3ac1ce65e/peerj-cs-10-2463-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ced6/11784779/1c8744bfd171/peerj-cs-10-2463-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ced6/11784779/1534ff31b13c/peerj-cs-10-2463-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ced6/11784779/2eab81bb7995/peerj-cs-10-2463-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ced6/11784779/4e18c83448b2/peerj-cs-10-2463-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ced6/11784779/a3b07fc054c4/peerj-cs-10-2463-g007.jpg

Similar articles

1. Advanced multiple document summarization iterative recursive transformer networks and multimodal transformer.
   PeerJ Comput Sci. 2024 Dec 9;10:e2463. doi: 10.7717/peerj-cs.2463. eCollection 2024.
2. Enhancing Persian text summarization through a three-phase fine-tuning and reinforcement learning approach with the mT5 transformer model.
   Sci Rep. 2025 Jan 2;15(1):80. doi: 10.1038/s41598-024-78235-3.
3. Unified extractive-abstractive summarization: a hybrid approach utilizing BERT and transformer models for enhanced document summarization.
   PeerJ Comput Sci. 2024 Nov 18;10:e2424. doi: 10.7717/peerj-cs.2424. eCollection 2024.
4. Integrating particle swarm optimization with backtracking search optimization feature extraction with two-dimensional convolutional neural network and attention-based stacked bidirectional long short-term memory classifier for effective single and multi-document summarization.
   PeerJ Comput Sci. 2024 Dec 12;10:e2435. doi: 10.7717/peerj-cs.2435. eCollection 2024.
5. Multimodal Abstractive Summarization using bidirectional encoder representations from transformers with attention mechanism.
   Heliyon. 2024 Feb 18;10(4):e26162. doi: 10.1016/j.heliyon.2024.e26162. eCollection 2024 Feb 29.
6. Exploiting Intersentence Information for Better Question-Driven Abstractive Summarization: Algorithm Development and Validation.
   JMIR Med Inform. 2022 Aug 15;10(8):e38052. doi: 10.2196/38052.
7. Knowledge-enhanced Graph Topic Transformer for Explainable Biomedical Text Summarization.
   IEEE J Biomed Health Inform. 2023 Aug 23;PP. doi: 10.1109/JBHI.2023.3308064.
8. Flight of the PEGASUS? Comparing Transformers on Few-Shot and Zero-Shot Multi-document Abstractive Summarization.
   Proc Int Conf Comput Ling. 2020 Dec;2020:5640-5646.
9. Sports competition tactical analysis model of cross-modal transfer learning intelligent robot based on Swin Transformer and CLIP.
   Front Neurorobot. 2023 Oct 30;17:1275645. doi: 10.3389/fnbot.2023.1275645. eCollection 2023.
10. Extractive text summarization model based on advantage actor-critic and graph matrix methodology.
   Math Biosci Eng. 2023 Jan;20(1):1488-1504. doi: 10.3934/mbe.2023067. Epub 2022 Oct 31.
