Medical Professional Enhancement Using Explainable Artificial Intelligence in Fetal Cardiac Ultrasound Screening.

Author Information

Sakai Akira, Komatsu Masaaki, Komatsu Reina, Matsuoka Ryu, Yasutomi Suguru, Dozen Ai, Shozu Kanto, Arakaki Tatsuya, Machino Hidenori, Asada Ken, Kaneko Syuzo, Sekizawa Akihiko, Hamamoto Ryuji

Author Affiliations

Artificial Intelligence Laboratory, Research Unit, Fujitsu Research, Fujitsu Ltd., 4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki 211-8588, Japan.

RIKEN AIP-Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan.

Publication Information

Biomedicines. 2022 Feb 25;10(3):551. doi: 10.3390/biomedicines10030551.

Abstract

Diagnostic support tools based on artificial intelligence (AI) have exhibited high performance in various medical fields. However, their clinical application remains challenging because AI decisions lack explanatory power (the black box problem), which makes it difficult to build trust with medical professionals. Nevertheless, visualizing the internal representations of deep neural networks can increase explanatory power and improve the confidence of medical professionals in AI decisions. We propose a novel deep learning-based explainable representation, the "graph chart diagram", to support fetal cardiac ultrasound screening, which suffers from low detection rates of congenital heart diseases owing to the difficulty of mastering the technique. Using this representation, screening performance, measured as the arithmetic mean of the area under the receiver operating characteristic curve, improves from 0.966 to 0.975 for experts, from 0.829 to 0.890 for fellows, and from 0.616 to 0.748 for residents. This is the first demonstration in which examiners used a deep learning-based explainable representation to improve the performance of fetal cardiac ultrasound screening, highlighting the potential of explainable AI to augment examiner capabilities.
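The reported metric is the arithmetic mean of per-examiner ROC AUC values. The following is a minimal sketch, assuming scikit-learn and illustrative synthetic data, of how such a mean AUC could be computed; the examiner names, labels, and scores are hypothetical and this is not the authors' evaluation code.

```python
# Sketch (assumed workflow, not the authors' code): arithmetic mean of
# per-examiner ROC AUC, the screening-performance metric in the abstract.
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical per-examiner ground-truth labels (1 = abnormal heart)
# and the confidence scores each examiner assigned to the same cases.
examiner_results = {
    "expert_1":   ([0, 1, 0, 1, 1, 0], [0.10, 0.92, 0.20, 0.88, 0.75, 0.30]),
    "fellow_1":   ([0, 1, 0, 1, 1, 0], [0.35, 0.70, 0.40, 0.60, 0.55, 0.45]),
    "resident_1": ([0, 1, 0, 1, 1, 0], [0.50, 0.55, 0.45, 0.52, 0.48, 0.60]),
}

# AUC of the ROC curve for each examiner.
aucs = {
    name: roc_auc_score(y_true, y_score)
    for name, (y_true, y_score) in examiner_results.items()
}

# Arithmetic mean across examiners, as reported per experience level.
mean_auc = np.mean(list(aucs.values()))
print(aucs, mean_auc)
```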

[Figure] https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7da8/8945208/4e29d013a008/biomedicines-10-00551-g0A1.jpg
