RecON: Online learning for sensorless freehand 3D ultrasound reconstruction.

Affiliations

National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, China.

Centre for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB), School of Computing, University of Leeds, Leeds, UK.

Publication Information

Med Image Anal. 2023 Jul;87:102810. doi: 10.1016/j.media.2023.102810. Epub 2023 Apr 5.

Abstract

Sensorless freehand 3D ultrasound (US) reconstruction based on deep networks shows promising advantages, such as a large field of view, relatively high resolution, low cost, and ease of use. However, existing methods mainly consider vanilla scan strategies with limited inter-frame variations. These methods thus degrade on complex but routine scan sequences in clinics. In this context, we propose a novel online learning framework for freehand 3D US reconstruction under complex scan strategies with diverse scanning velocities and poses. First, we devise a motion-weighted training loss in the training phase to regularize the scan variation frame-by-frame and better mitigate the negative effects of uneven inter-frame velocity. Second, we effectively drive online learning with local-to-global pseudo supervisions. It mines both the frame-level contextual consistency and the path-level similarity constraint to improve the inter-frame transformation estimation. We explore a global adversarial shape prior before transferring the latent anatomical prior as supervision. Third, we build a feasible differentiable reconstruction approximation to enable the end-to-end optimization of our online learning. Experimental results illustrate that our freehand 3D US reconstruction framework outperformed current methods on two large simulated datasets and one real dataset. In addition, we applied the proposed framework to clinical scan videos to further validate its effectiveness and generalizability.
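
The abstract gives no implementation detail for the motion-weighted training loss, but the idea of re-weighting per-frame errors by inter-frame motion can be sketched directly. The snippet below is a minimal, hypothetical PyTorch sketch, not the authors' code: it assumes the network regresses relative inter-frame transforms as 6-DoF vectors and simply scales each frame's L1 error by the magnitude of the corresponding ground-truth motion, so uneven scanning velocity does not bias the loss toward slow (or fast) segments. The function name motion_weighted_loss and the normalization choice are illustrative assumptions.

```python
# Minimal, hypothetical sketch of a motion-weighted inter-frame loss.
# The exact formulation used in RecON is not given in the abstract;
# this simply re-weights per-frame errors by ground-truth motion magnitude.
import torch


def motion_weighted_loss(pred_dof: torch.Tensor,
                         gt_dof: torch.Tensor,
                         eps: float = 1e-6) -> torch.Tensor:
    """pred_dof, gt_dof: (N-1, 6) relative 6-DoF transforms
    (3 translations + 3 rotations) between consecutive frames of an N-frame sweep."""
    # Per-frame regression error, averaged over the 6 DoF parameters.
    frame_err = (pred_dof - gt_dof).abs().mean(dim=1)   # shape (N-1,)

    # Magnitude of each ground-truth inter-frame motion step.
    motion = gt_dof.norm(dim=1)                         # shape (N-1,)

    # Normalize to weights: larger inter-frame motion -> larger weight.
    weights = motion / (motion.sum() + eps)             # shape (N-1,)

    return (weights * frame_err).sum()


# Example: a sweep of 50 frames, i.e. 49 relative transforms.
pred = torch.randn(49, 6, requires_grad=True)
gt = torch.randn(49, 6)
loss = motion_weighted_loss(pred, gt)
loss.backward()
```

In such a setup, a term of this form would stand in for a plain mean error during offline training; the online-learning stage described in the abstract would add the frame-level consistency and path-level similarity pseudo-supervision terms on top, which are not sketched here.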
