
Fully automatic classification of breast MRI background parenchymal enhancement using a transfer learning approach.

Author Information

Borkowski Karol, Rossi Cristina, Ciritsis Alexander, Marcon Magda, Hejduk Patryk, Stieb Sonja, Boss Andreas, Berger Nicole

Affiliation Information

Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, University of Zurich, Switzerland.

Publication Information

Medicine (Baltimore). 2020 Jul 17;99(29):e21243. doi: 10.1097/MD.0000000000021243.

Abstract

Marked enhancement of the fibroglandular tissue on contrast-enhanced breast magnetic resonance imaging (MRI) may affect lesion detection and classification and is suggested to be associated with a higher risk of developing breast cancer. Background parenchymal enhancement (BPE) is qualitatively classified according to the BI-RADS atlas into the categories "minimal," "mild," "moderate," and "marked." The purpose of this study was to train a deep convolutional neural network (dCNN) for standardized and automatic classification of BPE categories.

This IRB-approved retrospective study included 11,769 single MR images from 149 patients. The MR images were derived from the subtraction of the native T1-weighted images from the first post-contrast volume. A hierarchic approach was implemented, relying on 2 dCNN models: one for detection of MR slices imaging breast tissue and one for BPE classification. Data annotation was performed by 2 board-certified radiologists, whose consensus was chosen as the reference for BPE classification. The clinical performances of the single readers and of the dCNN were statistically compared using the quadratically weighted Cohen's kappa.

Slices depicting the breast were classified with training, validation, and real-world (test) accuracies of 98%, 96%, and 97%, respectively. Over the 4 classes, BPE classification reached mean accuracies of 74% for the training, 75% for the validation, and 75% for the real-world dataset. Compared to the reference, the inter-reader reliabilities for the radiologists were 0.780 (reader 1) and 0.679 (reader 2), whereas the reliability for the dCNN model was 0.815.

Automatic classification of BPE can be performed with high accuracy and supports the standardization of tissue classification in MRI.
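The abstract compares readers and the dCNN against the consensus reference using the quadratically weighted Cohen's kappa, which penalizes disagreements between ordinal categories (e.g. "minimal" vs. "marked") by the squared distance between them. A minimal pure-Python sketch of that statistic follows; the function name and the 0-indexed encoding of the 4 BPE categories are illustrative, not taken from the paper:

```python
def quadratic_weighted_kappa(rater_a, rater_b, num_classes=4):
    """Quadratically weighted Cohen's kappa for ordinal labels 0..num_classes-1.

    Weights are w_ij = (i - j)^2 / (num_classes - 1)^2, so adjacent-category
    disagreements are penalized far less than distant ones.
    """
    n = len(rater_a)
    # Observed confusion matrix between the two raters.
    observed = [[0.0] * num_classes for _ in range(num_classes)]
    for a, b in zip(rater_a, rater_b):
        observed[a][b] += 1
    # Marginal histograms of each rater's label usage.
    hist_a = [sum(observed[i]) for i in range(num_classes)]
    hist_b = [sum(observed[i][j] for i in range(num_classes))
              for j in range(num_classes)]
    num = den = 0.0
    for i in range(num_classes):
        for j in range(num_classes):
            w = (i - j) ** 2 / (num_classes - 1) ** 2
            expected = hist_a[i] * hist_b[j] / n  # chance-agreement count
            num += w * observed[i][j]
            den += w * expected
    return 1.0 - num / den
```

`scikit-learn` offers the same computation as `cohen_kappa_score(a, b, weights="quadratic")`; perfect agreement yields 1.0, and values such as the paper's 0.815 for the dCNN indicate strong agreement with the consensus reference.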

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ce34/7373599/2b133a3f3f84/medi-99-e21243-g001.jpg
