

Similar Articles

1. Epidemic Question Answering: question generation and entailment for Answer Nugget discovery.
   J Am Med Inform Assoc. 2023 Jan 18;30(2):329-339. doi: 10.1093/jamia/ocac222.
2. A question-entailment approach to question answering.
   BMC Bioinformatics. 2019 Oct 22;20(1):511. doi: 10.1186/s12859-019-3119-4.
3. SemBioNLQA: A semantic biomedical question answering system for retrieving exact and ideal answers to natural language questions.
   Artif Intell Med. 2020 Jan;102:101767. doi: 10.1016/j.artmed.2019.101767. Epub 2019 Nov 28.
4. Consumer health information and question answering: helping consumers find answers to their health-related information needs.
   J Am Med Inform Assoc. 2020 Feb 1;27(2):194-201. doi: 10.1093/jamia/ocz152.
5. Automatic question answering for multiple stakeholders, the epidemic question answering dataset.
   Sci Data. 2022 Jul 21;9(1):432. doi: 10.1038/s41597-022-01533-w.
6. List-wise learning to rank biomedical question-answer pairs with deep ranking recursive autoencoders.
   PLoS One. 2020 Nov 9;15(11):e0242061. doi: 10.1371/journal.pone.0242061. eCollection 2020.
7. BioASQ-QA: A manually curated corpus for Biomedical Question Answering.
   Sci Data. 2023 Mar 27;10(1):170. doi: 10.1038/s41597-023-02068-4.
8. Recognizing Question Entailment for Medical Question Answering.
   AMIA Annu Symp Proc. 2017 Feb 10;2016:310-318. eCollection 2016.
9. Revealing Opinions for COVID-19 Questions Using a Context Retriever, Opinion Aggregator, and Question-Answering Model: Model Development Study.
   J Med Internet Res. 2021 Mar 19;23(3):e22860. doi: 10.2196/22860.
10. Information retrieval and question answering: A case study on COVID-19 scientific literature.
    Knowl Based Syst. 2022 Mar 15;240:108072. doi: 10.1016/j.knosys.2021.108072. Epub 2021 Dec 31.

Cited By

1. Question answering systems for health professionals at the point of care - a systematic review.
   J Am Med Inform Assoc. 2024 Apr 3;31(4):1009-1024. doi: 10.1093/jamia/ocae015.

References

1. Automatic question answering for multiple stakeholders, the epidemic question answering dataset.
   Sci Data. 2022 Jul 21;9(1):432. doi: 10.1038/s41597-022-01533-w.
2. Consumer health information and question answering: helping consumers find answers to their health-related information needs.
   J Am Med Inform Assoc. 2020 Feb 1;27(2):194-201. doi: 10.1093/jamia/ocz152.
3. Learning relevance models for patient cohort retrieval.
   JAMIA Open. 2018 Oct;1(2):265-275. doi: 10.1093/jamiaopen/ooy010. Epub 2018 Sep 28.
4. Deep learning in neural networks: an overview.
   Neural Netw. 2015 Jan;61:85-117. doi: 10.1016/j.neunet.2014.09.003. Epub 2014 Oct 13.
5. Open-access MIMIC-II database for intensive care research.
   Annu Int Conf IEEE Eng Med Biol Soc. 2011;2011:8315-8. doi: 10.1109/IEMBS.2011.6092050.

Epidemic Question Answering: question generation and entailment for Answer Nugget discovery.

Affiliations

Human Language Technology Research Institute, Department of Computer Science, University of Texas at Dallas, Richardson, Texas, USA.

Publication Information

J Am Med Inform Assoc. 2023 Jan 18;30(2):329-339. doi: 10.1093/jamia/ocac222.

DOI: 10.1093/jamia/ocac222
PMID: 36394232
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9846678/
Abstract

OBJECTIVE

The rapidly growing body of communications during the COVID-19 pandemic posed a challenge to information seekers, who struggled to find answers to their specific and changing information needs. We designed a Question Answering (QA) system capable of answering ad-hoc questions about the COVID-19 disease, its causal virus SARS-CoV-2, and the recommended response to the pandemic.

MATERIALS AND METHODS

The QA system incorporates, in addition to relevance models, automatic generation of questions from relevant sentences. We relied on entailment between questions for (1) pinpointing answers and (2) selecting novel answers early in the list of its results.
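The entailment-driven answer pinpointing described above can be sketched roughly as follows. This is a toy illustration, not the authors' pipeline: the paper uses neural question-generation and question-entailment models, which are stood in for here by a hypothetical `generate_question` stub and a simple token-overlap score, and the candidate sentences are invented.

```python
# Sketch: rank candidate answer sentences by how strongly the question
# generated from each candidate is entailed by the user's question.
# The entailment scorer below is a bag-of-words overlap heuristic, a
# placeholder for a trained question-entailment model.

def tokenize(text):
    return set(text.lower().replace("?", "").replace(".", "").split())

def entailment_score(user_question, generated_question):
    # Stand-in for a neural question-entailment model: fraction of the
    # user question's tokens covered by the generated question.
    uq, gq = tokenize(user_question), tokenize(generated_question)
    return len(uq & gq) / len(uq) if uq else 0.0

def generate_question(sentence):
    # Hypothetical stub for a question-generation model: turns a
    # declarative candidate sentence into a crude "what" question.
    return "what " + sentence.lower().rstrip(".") + "?"

def rank_answers(user_question, candidate_sentences):
    # Score each candidate by entailment between the user question and
    # the question generated from it, then sort best-first.
    scored = [(entailment_score(user_question, generate_question(s)), s)
              for s in candidate_sentences]
    return [s for score, s in sorted(scored, key=lambda p: -p[0])]

candidates = [
    "Masks reduce transmission of SARS-CoV-2.",
    "The pandemic began in late 2019.",
]
print(rank_answers("Do masks reduce SARS-CoV-2 transmission?", candidates))
```

The design point the abstract makes is that entailment between questions, rather than direct question-to-sentence matching, is what pinpoints answers; the toy scorer above only approximates that idea.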

RESULTS

The QA system produced state-of-the-art results when processing questions asked by experts (eg, researchers, scientists, or clinicians) and competitive results when processing questions asked by consumers of health information. Although state-of-the-art models for question generation and question entailment were used, more than half of the answers were missed due to the limitations of the relevance models employed.

DISCUSSION

Although question entailment enabled by automatic question generation is the cornerstone of our QA system's architecture, question entailment did not prove to always be reliable or sufficient in ranking the answers. Question entailment should be enhanced with additional inferential capabilities.

CONCLUSION

The QA system presented in this article produced state-of-the-art results processing expert questions and competitive results processing consumer questions. Improvements should be considered by using better relevance models and enhanced inference methods. Moreover, experts and consumers have different answer expectations, which should be accounted for in future QA development.
