

Segmentation and Estimation of Fetal Biometric Parameters using an Attention Gate Double U-Net with Guided Decoder Architecture.

Affiliations

Department of Applied Mechanics, Motilal Nehru National Institute of Technology Allahabad, Prayagraj, 211004, Uttar Pradesh, India.

Kamra Ultrasound Centre and United Diagnostics, Prayagraj, 211002, Uttar Pradesh, India.

Publication Information

Comput Biol Med. 2024 Sep;180:109000. doi: 10.1016/j.compbiomed.2024.109000. Epub 2024 Aug 11.

Abstract

Fetal health is evaluated using biometric parameters obtained from low-resolution ultrasound images. In existing protocols, the accuracy of these parameters typically depends on conventional image-processing approaches and is therefore prone to error. This study introduces the Attention Gate Double U-Net with Guided Decoder (ADU-GD) model, designed specifically for fetal biometric parameter prediction. The attention network and guided decoder dynamically merge local features with their global dependencies, improving the precision of parameter estimation. Benchmarked against well-established models, ADU-GD delivers a Mean Absolute Error of 0.99 mm and a segmentation accuracy of 99.1%. The proposed model consistently achieved a high Dice index of about 99.1 ± 0.8, a minimal Hausdorff distance of about 1.01 ± 1.07, and a low Average Symmetric Surface Distance of about 0.25 ± 0.21. In a comprehensive evaluation, ADU-GD outperformed existing deep-learning models, including Double U-Net, DeepLabv3, FCN-32s, PSPNet, SegNet, Trans U-Net, Swin U-Net, Mask-R2CNN, and RDHCformer, in Mean Absolute Error for the key fetal dimensions of Head Circumference, Abdomen Circumference, Femur Length, and BiParietal Diameter, achieving MAE values of 2.2 mm, 2.6 mm, 0.6 mm, and 1.2 mm, respectively.
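The abstract names an attention-gated double U-Net with a guided decoder but gives no implementation details. As a rough orientation, the sketch below shows a generic additive attention gate of the kind popularized by Attention U-Net, which such architectures commonly build on; the class name, channel arguments, and the assumption that the gating signal is already upsampled to the skip-feature resolution are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn


class AttentionGate(nn.Module):
    """Additive attention gate in the style of Attention U-Net.

    Illustrative sketch only: the paper's ADU-GD gate may differ in its
    resampling, normalization, and how the guided decoder adds supervision.
    """

    def __init__(self, skip_channels: int, gate_channels: int, inter_channels: int):
        super().__init__()
        # Project the encoder skip features (x) and the decoder gating
        # signal (g) into a common intermediate feature space.
        self.theta_x = nn.Conv2d(skip_channels, inter_channels, kernel_size=1)
        self.phi_g = nn.Conv2d(gate_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        # x: encoder skip features; g: decoder features, assumed here to be
        # already upsampled to the same spatial size as x.
        attn = self.relu(self.theta_x(x) + self.phi_g(g))
        attn = self.sigmoid(self.psi(attn))  # per-pixel weights in [0, 1]
        return x * attn  # suppress irrelevant skip-path regions before concatenation


if __name__ == "__main__":
    gate = AttentionGate(skip_channels=64, gate_channels=128, inter_channels=32)
    skip = torch.randn(1, 64, 128, 128)      # encoder skip connection
    gating = torch.randn(1, 128, 128, 128)   # decoder signal at matching resolution
    print(gate(skip, gating).shape)          # torch.Size([1, 64, 128, 128])
```

In a double U-Net, a gate like this would typically sit on each skip connection so that the decoder attends to fetal-structure regions (head, abdomen, femur) before the biometric measurements are derived from the segmentation masks.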

