What Does the Evidence Say? Models to Help Make Sense of the Biomedical Literature.

Author Information

Wallace Byron C

Affiliation

Khoury College of Computer Sciences, Northeastern University, Boston, MA, USA.

Publication Information

IJCAI (U S). 2019 Aug;2019:6416-6420. doi: 10.24963/ijcai.2019/899.

Abstract

Ideally decisions regarding medical treatments would be informed by the totality of the available evidence. The best evidence we currently have is in published natural language articles describing the conduct and results of clinical trials. Because these are unstructured, it is difficult for domain experts (e.g., physicians) to sort through and appraise the evidence pertaining to a given clinical question. Natural language technologies have the potential to improve access to the evidence via semi-automated processing of the biomedical literature. In this brief paper I highlight work on developing tasks, corpora, and models to support semi-automated evidence retrieval and extraction. The aim is to design models that can consume articles describing clinical trials and automatically extract from these key clinical variables and findings, and estimate their reliability. Completely automating 'machine reading' of evidence remains a distant aim given current technologies; the more immediate hope is to use such technologies to help domain experts access and make sense of unstructured biomedical evidence more efficiently, with the ultimate aim of improving patient care. Aside from their practical importance, these tasks pose core NLP challenges that directly motivate methodological innovation.
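
To make the extraction task described above concrete, the toy sketch below illustrates the input/output shape such a system might expose: an unstructured trial abstract goes in, and structured population / intervention / outcome spans come out. The Finding class, the extract_evidence function, and the cue-phrase rules are illustrative assumptions only, not the author's models; the systems this line of work actually builds (e.g., RobotReviewer, cited below) replace such hand-written patterns with trained statistical models.

```python
# Minimal, hypothetical sketch of a clinical-evidence extractor's interface.
# The regex "cues" stand in for the learned sequence-tagging models the paper
# describes; only the input/output structure is meant to be representative.
import re
from dataclasses import dataclass, field


@dataclass
class Finding:
    """Key clinical variables a reader would want pulled from one trial report."""
    population: list = field(default_factory=list)
    intervention: list = field(default_factory=list)
    outcome: list = field(default_factory=list)


# Crude cue patterns (assumptions for illustration), not a real tagger.
CUES = {
    "population": re.compile(r"\b(\d+\s+(?:patients|participants|adults|children))\b", re.I),
    "intervention": re.compile(r"\b(?:received|assigned to|randomized to)\s+([A-Za-z][\w\- ]*?)(?:\s+or\b|[,.])", re.I),
    "outcome": re.compile(r"\bprimary outcome was\s+([\w\- ]+?)[,.]", re.I),
}


def extract_evidence(abstract: str) -> Finding:
    """Return population / intervention / outcome spans found in one abstract."""
    finding = Finding()
    for label, pattern in CUES.items():
        getattr(finding, label).extend(m.group(1).strip() for m in pattern.finditer(abstract))
    return finding


if __name__ == "__main__":
    example = ("In this trial, 120 patients with type 2 diabetes were randomized to "
               "metformin or placebo. The primary outcome was change in HbA1c at 12 weeks.")
    print(extract_evidence(example))
    # Finding(population=['120 patients'], intervention=['metformin'],
    #         outcome=['change in HbA1c at 12 weeks'])
```

In the real systems, learned models trained on annotated trial reports take the place of the cue dictionary, and their outputs come with reliability estimates rather than raw string matches.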

Similar Articles

1. What Does the Evidence Say? Models to Help Make Sense of the Biomedical Literature. IJCAI (U S). 2019 Aug;2019:6416-6420. doi: 10.24963/ijcai.2019/899.
2. A comparison of word embeddings for the biomedical natural language processing. J Biomed Inform. 2018 Nov;87:12-20. doi: 10.1016/j.jbi.2018.09.008. Epub 2018 Sep 12.
3. Automatically Detecting Failures in Natural Language Processing Tools for Online Community Text. J Med Internet Res. 2015 Aug 31;17(8):e212. doi: 10.2196/jmir.4612.
4. SemBioNLQA: A semantic biomedical question answering system for retrieving exact and ideal answers to natural language questions. Artif Intell Med. 2020 Jan;102:101767. doi: 10.1016/j.artmed.2019.101767. Epub 2019 Nov 28.
6. Towards a characterization of apparent contradictions in the biomedical literature using context analysis. J Biomed Inform. 2019 Oct;98:103275. doi: 10.1016/j.jbi.2019.103275. Epub 2019 Aug 29.
7. Automated ontology generation framework powered by linked biomedical ontologies for disease-drug domain. Comput Methods Programs Biomed. 2018 Oct;165:117-128. doi: 10.1016/j.cmpb.2018.08.010. Epub 2018 Aug 16.
8. Automating Biomedical Evidence Synthesis: RobotReviewer. Proc Conf Assoc Comput Linguist Meet. 2017 Jul;2017:7-12. doi: 10.18653/v1/P17-4002.
9. A systematic review of natural language processing for classification tasks in the field of incident reporting and adverse event analysis. Int J Med Inform. 2019 Dec;132:103971. doi: 10.1016/j.ijmedinf.2019.103971. Epub 2019 Oct 5.
10. On the Construction of Multilingual Corpora for Clinical Text Mining. Stud Health Technol Inform. 2020 Jun 16;270:347-351. doi: 10.3233/SHTI200180.

References

1. Machine learning to help researchers evaluate biases in clinical trials: a prospective, randomized user study. BMC Med Inform Decis Mak. 2019 May 8;19(1):96. doi: 10.1186/s12911-019-0814-z.
2. Syntactic Patterns Improve Information Extraction for Medical Search. Proc Conf. 2018 Jun;2018(Short Paper):371-377.
4. Prioritising references for systematic reviews with RobotAnalyst: A user study. Res Synth Methods. 2018 Sep;9(3):470-488. doi: 10.1002/jrsm.1311. Epub 2018 Jul 30.
5. Machine learning for identifying Randomized Controlled Trials: An evaluation and practitioner's guide. Res Synth Methods. 2018 Dec;9(4):602-614. doi: 10.1002/jrsm.1287. Epub 2018 Feb 7.
6. Aggregating and Predicting Sequence Labels from Crowd Annotations. Proc Conf Assoc Comput Linguist Meet. 2017;2017:299-309. doi: 10.18653/v1/P17-1028.
7. Automating Biomedical Evidence Synthesis: RobotReviewer. Proc Conf Assoc Comput Linguist Meet. 2017 Jul;2017:7-12. doi: 10.18653/v1/P17-4002.
8. An exploration of crowdsourcing citation screening for systematic reviews. Res Synth Methods. 2017 Sep;8(3):366-386. doi: 10.1002/jrsm.1252. Epub 2017 Jul 4.
10. Rationale-Augmented Convolutional Neural Networks for Text Classification. Proc Conf Empir Methods Nat Lang Process. 2016 Nov;2016:795-804. doi: 10.18653/v1/d16-1076.
