
The reporting quality of natural language processing studies: systematic review of studies of radiology reports.

Affiliations

Centre for Clinical Brain Sciences, University of Edinburgh, Chancellor's Building, Little France, Edinburgh, EH16 4TJ, Scotland, UK.

Centre for Medical Informatics, Usher Institute, University of Edinburgh, Edinburgh, Scotland, UK.

Publication information

BMC Med Imaging. 2021 Oct 2;21(1):142. doi: 10.1186/s12880-021-00671-8.

Abstract

BACKGROUND

Automated language analysis of radiology reports using natural language processing (NLP) can provide valuable information on patients' health and disease. Given the field's rapid development, NLP studies should report their methodology transparently to allow comparison of approaches and reproducibility. This systematic review aims to summarise the characteristics and reporting quality of studies applying NLP to radiology reports.

METHODS

We searched Google Scholar for studies published in English that applied NLP to radiology reports of any imaging modality between January 2015 and October 2019. At least two reviewers independently performed screening and completed data extraction. We specified 15 criteria relating to data source, datasets, ground truth, outcomes, and reproducibility for quality assessment. The primary NLP performance measures were precision, recall and F1 score.
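The primary performance measures named above (precision, recall, and F1 score) are all derived from true-positive, false-positive, and false-negative counts. A minimal sketch of how they relate (function and variable names are illustrative, not taken from the study):

```python
# Compute the review's primary NLP performance measures from raw counts.
# (precision_recall_f1 is an illustrative helper, not part of the study.)

def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Return (precision, recall, F1) for one class, guarding against zero division."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0   # of flagged reports, how many were correct
    recall = tp / (tp + fn) if (tp + fn) else 0.0      # of truly positive reports, how many were found
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Example: a classifier flags 10 reports as positive; 8 are correct,
# and it misses 2 truly positive reports.
p, r, f = precision_recall_f1(tp=8, fp=2, fn=2)
print(round(p, 2), round(r, 2), round(f, 2))  # → 0.8 0.8 0.8
```

F1 is the harmonic mean of precision and recall, so it penalises a model that trades one measure heavily against the other.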

RESULTS

Of the 4,836 records retrieved, we included 164 studies that used NLP on radiology reports. The most common clinical applications of NLP were disease information or classification (28%) and diagnostic surveillance (27.4%). Most studies used English radiology reports (86%). Reports from mixed imaging modalities were used in 28% of the studies. Oncology (24%) was the most frequent disease area. Most studies had a dataset size > 200 (85.4%), but the proportions of studies that described their annotated, training, validation, and test sets were 67.1%, 63.4%, 45.7%, and 67.7% respectively. About half of the studies reported precision (48.8%) and recall (53.7%). Few studies reported external validation (10.8%), data availability (8.5%), or code availability (9.1%). There was no pattern of performance associated with overall reporting quality.

CONCLUSIONS

There is a range of potential clinical applications for NLP of radiology reports in health services and research. However, we found suboptimal reporting quality that precludes comparison, reproducibility, and replication. Our results support the need for development of reporting standards specific to clinical NLP studies.


Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7ce8/8487512/35fd9d85b016/12880_2021_671_Fig1_HTML.jpg
