Creation and validation of a chest X-ray dataset with eye-tracking and report dictation for AI development.

Affiliations

IBM Research, Almaden Research Center, San Jose, CA, 95120, USA.

Department of Computer Science, Virginia Tech, Blacksburg, VA, 24061, USA.

Publication

Sci Data. 2021 Mar 25;8(1):92. doi: 10.1038/s41597-021-00863-5.

Abstract

We developed a rich dataset of Chest X-Ray (CXR) images to assist investigators in artificial intelligence. The data were collected using an eye-tracking system while a radiologist reviewed and reported on 1,083 CXR images. The dataset contains the following aligned data: CXR image, transcribed radiology report text, radiologist's dictation audio and eye gaze coordinates data. We hope this dataset can contribute to various areas of research particularly towards explainable and multimodal deep learning/machine learning methods. Furthermore, investigators in disease classification and localization, automated radiology report generation, and human-machine interaction can benefit from these data. We report deep learning experiments that utilize the attention maps produced by the eye gaze dataset to show the potential utility of this dataset.
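The abstract mentions attention maps produced from the eye gaze data. As a minimal sketch of how such a map might be derived (the function name, the Gaussian-accumulation approach, and the `sigma` value are illustrative assumptions, not the authors' method), gaze fixation coordinates can be accumulated into a normalized heatmap over the image:

```python
import numpy as np

def gaze_to_attention_map(gaze_xy, image_shape, sigma=30.0):
    """Accumulate gaze fixations into a smoothed attention heatmap.

    gaze_xy: (N, 2) array of (x, y) fixation coordinates in pixel space.
    image_shape: (height, width) of the CXR image.
    sigma: Gaussian spread in pixels (a hypothetical choice).
    """
    h, w = image_shape
    heatmap = np.zeros((h, w), dtype=np.float64)
    # Pixel coordinate grids for evaluating the Gaussian at every pixel.
    ys, xs = np.mgrid[0:h, 0:w]
    for x, y in gaze_xy:
        heatmap += np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    if heatmap.max() > 0:
        heatmap /= heatmap.max()  # normalize to [0, 1]
    return heatmap

# Example: three fixations on a 256x256 image.
fixations = np.array([[100, 80], [120, 90], [200, 200]])
attn = gaze_to_attention_map(fixations, (256, 256))
```

A map like this can then be resized to a network's feature-map resolution and used as a supervision target or attention prior in the kind of deep learning experiments the abstract describes.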

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d695/7994908/c2ca1d209352/41597_2021_863_Fig1_HTML.jpg
