

Liver MRI proton density fat fraction inference from contrast enhanced CT images using deep learning: A proof-of-concept study.

Author information

Nasir Md, Xu Yixi, Hasenstab Kyle, Yechoor Alekhya, Dodhia Rahul, Weeks William B, Ferres Juan Lavista, Cunha Guilherme Moura

Affiliations

AI for Good Lab, Microsoft, Redmond, Washington, United States of America.

Department of Mathematics and Statistics, San Diego State University, San Diego, California, United States of America.

Publication information

PLoS One. 2025 Aug 8;20(8):e0328867. doi: 10.1371/journal.pone.0328867. eCollection 2025.

Abstract

Metabolic dysfunction-associated steatotic liver disease (MASLD) is the most common cause of chronic liver disease worldwide, affecting over 30% of the global general population. Its progressive nature and association with other chronic diseases make early diagnosis important. MRI Proton Density Fat Fraction (PDFF) is the most accurate noninvasive method for quantitatively assessing liver fat, but it is expensive and has limited availability; accurately quantifying liver fat from more accessible and affordable imaging could therefore improve patient care. This proof-of-concept study explores the feasibility of inferring liver MRI-PDFF values from contrast-enhanced computed tomography (CECT) using deep learning. In this retrospective, cross-sectional study, we analyzed data from living liver donor candidates who underwent concurrent CECT and MRI-PDFF as part of their pre-surgical workup between April 2021 and October 2022. Manual MRI-PDFF analysis was performed following a standard-of-care clinical protocol and used as ground truth. After liver segmentation and registration, a deep neural network (DNN) with a 3D U-Net architecture was trained using CECT images as single-channel input and the concurrent MRI-PDFF images as single-channel output. We evaluated performance using mean absolute error (MAE), root mean squared error (RMSE), and mean error (defined as the mean difference between the results of the comparator groups), with 95% confidence intervals (CIs). We used kappa statistics and Bland-Altman plots to assess agreement between DNN-predicted and ground-truth steatosis grades and PDFF values, respectively. The final study cohort comprised 94 patients (mean PDFF = 3.8%; range, 0.2-22.3%). When comparing ground truth to the segmented reference (MRI-PDFF), our model had an MAE of 0.56, an RMSE of 0.77, and a mean error of 0.06 (-1.75, 1.86); when comparing medians of the predicted and reference MRI-PDFF images, it had an MAE, RMSE, and mean error of 2.94, 4.27, and 1.28 (-4.58, 7.14), respectively. We found substantial agreement between categorical steatosis grades obtained from DNN-predicted and clinical ground-truth PDFF (kappa = 0.75). While the model's ability to infer exact MRI-PDFF values from CECT images was limited, categorical classification of fat fraction at lower grades was robust, outperforming previously attempted methods.
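The evaluation described in the abstract (MAE, RMSE, mean error with a 95% interval, and Cohen's kappa on categorical steatosis grades) can be sketched roughly as follows. The grade cutoffs (5%, 15%, 25%) and the Bland-Altman-style limits around the mean error are illustrative assumptions only; the abstract does not state which grading thresholds were used or exactly how its intervals were computed.

```python
import numpy as np

def evaluate_pdff(pred, ref):
    """MAE, RMSE, and mean error between predicted and reference PDFF
    values (%), with 95% limits around the mean error computed
    Bland-Altman style as mean +/- 1.96 * SD of the differences."""
    diff = np.asarray(pred, float) - np.asarray(ref, float)
    mae = float(np.mean(np.abs(diff)))
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    me, sd = float(np.mean(diff)), float(np.std(diff))
    return {"MAE": mae, "RMSE": rmse, "mean_error": me,
            "limits": (me - 1.96 * sd, me + 1.96 * sd)}

def steatosis_grade(pdff):
    """Map PDFF (%) to categorical steatosis grades 0-3.
    The cutoffs (5%, 15%, 25%) are placeholders for illustration."""
    return np.digitize(np.asarray(pdff, float), [5.0, 15.0, 25.0])

def cohen_kappa(a, b):
    """Unweighted Cohen's kappa between two lists of categorical grades."""
    a, b = np.asarray(a), np.asarray(b)
    po = float(np.mean(a == b))  # observed agreement
    cats = np.union1d(a, b)
    pe = float(sum(np.mean(a == c) * np.mean(b == c) for c in cats))
    return (po - pe) / (1 - pe)
```

For example, `evaluate_pdff` applied to paired per-patient PDFF values yields the MAE/RMSE/mean-error triple reported above, and `cohen_kappa(steatosis_grade(pred), steatosis_grade(ref))` reproduces the kind of categorical agreement summarized by kappa = 0.75.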


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/75a6/12333992/17f38d234004/pone.0328867.g001.jpg
