

Exploiting Intersentence Information for Better Question-Driven Abstractive Summarization: Algorithm Development and Validation.

Authors

Wang Xin, Wang Jian, Xu Bo, Lin Hongfei, Zhang Bo, Yang Zhihao

Affiliation

School of Computer Science and Technology, Dalian University of Technology, Dalian, China.

Publication

JMIR Med Inform. 2022 Aug 15;10(8):e38052. doi: 10.2196/38052.

DOI: 10.2196/38052
PMID: 35969463
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9425173/
Abstract

BACKGROUND

Question-driven summarization has become a practical and accurate approach to summarizing a source document. The generated summary should be concise and consistent with the question of interest, so it can be regarded as the answer to a nonfactoid question. Existing methods do not fully exploit question information over documents or dependencies across sentences. Moreover, most existing summarization evaluation tools, such as recall-oriented understudy for gisting evaluation (ROUGE), calculate N-gram overlaps between the generated summary and the reference summary while neglecting factual consistency.
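
The N-gram overlap that ROUGE measures reduces to counting shared n-grams between the two summaries. A minimal sketch of ROUGE-N recall (an illustration only, not the official ROUGE toolkit, which also handles stemming, stopword options, and multiple references):

```python
from collections import Counter

def rouge_n_recall(candidate: str, reference: str, n: int = 2) -> float:
    """ROUGE-N recall: fraction of reference n-grams that appear in the candidate."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand = ngrams(candidate.split(), n)
    ref = ngrams(reference.split(), n)
    overlap = sum((cand & ref).values())  # clipped counts, as in ROUGE
    return overlap / max(sum(ref.values()), 1)
```

Because it only counts surface overlap, a candidate that swaps one factual word for another (e.g. "increased" for "decreased") loses almost no ROUGE score, which is exactly the factual-consistency blind spot described above.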

OBJECTIVE

This paper proposes a novel question-driven abstractive summarization model based on the Transformer, comprising a two-step attention mechanism and an overall integration mechanism, which can generate concise and consistent summaries for nonfactoid question answering.

METHODS

Specifically, the two-step attention mechanism is proposed to exploit mutual information both from the question to the context and from each sentence to the other sentences. We further introduced an overall integration mechanism and a novel pointer network for information integration. We conducted a question-answering task to evaluate factual consistency between the generated summary and the reference summary.
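
The paper's exact architecture is not reproduced here, but the two attention steps can be sketched with plain scaled dot-product attention. All shapes, names, and the random embeddings below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(queries, keys, values):
    """Scaled dot-product attention: each query is a weighted mix of values."""
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])
    return softmax(scores, axis=-1) @ values

rng = np.random.default_rng(0)
d = 64
question = rng.standard_normal((8, d))      # 8 question-token embeddings
doc_tokens = rng.standard_normal((120, d))  # 120 document-token embeddings
sentences = rng.standard_normal((10, d))    # 10 sentence-level embeddings

# Step 1: question-to-context attention — question tokens attend over the document.
q_aware = attend(question, doc_tokens, doc_tokens)
# Step 2: intersentence attention — each sentence attends over the other sentences,
# capturing the cross-sentence dependencies the model is designed to exploit.
sent_ctx = attend(sentences, sentences, sentences)
```

A pointer network would then sit on top of representations like these, mixing generation from the vocabulary with copying tokens directly from the source document.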

RESULTS

The experimental results of question-driven summarization on the PubMedQA data set showed that our model achieved ROUGE-1, ROUGE-2, and ROUGE-L measures of 36.01, 15.59, and 30.22, respectively, outperforming the state-of-the-art methods with an absolute gain of 0.79 in the ROUGE-2 score. The question-answering task demonstrates that the summaries generated by our model have better factual consistency. Our method achieved 94.2% accuracy and a 77.57% F1 score.
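
The accuracy and F1 figures follow the standard definitions; for reference, F1 is the harmonic mean of precision and recall (the counts in the usage example are illustrative, not the paper's):

```python
def precision_recall_f1(tp: int, fp: int, fn: int):
    """Standard precision, recall, and their harmonic mean (F1)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# e.g. 8 true positives, 2 false positives, 2 false negatives:
p, r, f1 = precision_recall_f1(8, 2, 2)  # → 0.8, 0.8, 0.8
```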

CONCLUSIONS

Our proposed question-driven summarization model effectively exploits the mutual information among the question, document, and summary to generate concise and consistent summaries.


Figures (from PMC):
Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f834/9425173/f2d6ba077477/medinform_v10i8e38052_fig1.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f834/9425173/67a8a586f473/medinform_v10i8e38052_fig2.jpg
Figure 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f834/9425173/49018c1a823a/medinform_v10i8e38052_fig3.jpg

Similar Articles

1. Exploiting Intersentence Information for Better Question-Driven Abstractive Summarization: Algorithm Development and Validation. JMIR Med Inform. 2022 Aug 15;10(8):e38052. doi: 10.2196/38052.
2. Question-aware transformer models for consumer health question summarization. J Biomed Inform. 2022 Apr;128:104040. doi: 10.1016/j.jbi.2022.104040. Epub 2022 Mar 6.
3. Nonfactoid Question Answering as Query-Focused Summarization With Graph-Enhanced Multihop Inference. IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):11231-11245. doi: 10.1109/TNNLS.2023.3258413. Epub 2024 Aug 5.
4. CovSumm: an unsupervised transformer-cum-graph-based hybrid document summarization model for CORD-19. J Supercomput. 2023 Apr 26:1-23. doi: 10.1007/s11227-023-05291-3.
5. Knowledge-Infused Abstractive Summarization of Clinical Diagnostic Interviews: Framework Development Study. JMIR Ment Health. 2021 May 10;8(5):e20865. doi: 10.2196/20865.
6. Exploring the Efficacy of Large Language Models in Summarizing Mental Health Counseling Sessions: Benchmark Study. JMIR Ment Health. 2024 Jul 23;11:e57306. doi: 10.2196/57306.
7. Abstractive text summarization of low-resourced languages using deep learning. PeerJ Comput Sci. 2023 Jan 13;9:e1176. doi: 10.7717/peerj-cs.1176. eCollection 2023.
8. Graph-based biomedical text summarization: An itemset mining and sentence clustering approach. J Biomed Inform. 2018 Aug;84:42-58. doi: 10.1016/j.jbi.2018.06.005. Epub 2018 Jun 15.
9. Qualitative Analysis of Text Summarization Techniques and Its Applications in Health Domain. Comput Intell Neurosci. 2022 Feb 9;2022:3411881. doi: 10.1155/2022/3411881. eCollection 2022.
10. Graph-based abstractive biomedical text summarization. J Biomed Inform. 2022 Aug;132:104099. doi: 10.1016/j.jbi.2022.104099. Epub 2022 Jun 11.
