Torres Helena R, Oliveira Bruno, Fritze Anne, Birdir Cahit, Rudiger Mario, Fonseca Jaime C, Morais Pedro, Vilaca Joao L
IEEE J Biomed Health Inform. 2024 Dec;28(12):7287-7299. doi: 10.1109/JBHI.2024.3440171. Epub 2024 Dec 5.
Objective - Medical image segmentation is essential for several clinical tasks, including diagnosis, surgical and treatment planning, and image-guided interventions. Deep Learning (DL) methods have become the state of the art for many image segmentation scenarios. However, effectively training a DL model requires a large, well-annotated dataset, which is usually difficult to obtain in clinical practice, especially for 3D images.
Methods - In this paper, we propose Deep-DM, a learning-guided deformable model framework for 3D medical image segmentation using limited training data. In the proposed method, an energy function is learned by a Convolutional Neural Network (CNN) and integrated into an explicit deformable model to drive the evolution of an initial surface towards the object to segment. Specifically, the learning-based energy function is retrieved at each iteration from a localized anatomical representation of the image, which captures the image information around the evolving surface. By focusing on localized regions of interest, this representation excludes irrelevant image information, facilitating the learning process.
Results and conclusion - The performance of the proposed method is demonstrated for the tasks of left ventricle and fetal head segmentation in ultrasound, left atrium segmentation in Magnetic Resonance, and bladder segmentation in Computed Tomography, using different numbers of training volumes in each study. The results show the feasibility of the proposed method for segmenting different anatomical structures across imaging modalities. Moreover, the results also show that the proposed approach is less dependent on the size of the training dataset than state-of-the-art DL-based segmentation methods, outperforming them on all tasks when few training samples are available.
Significance - Overall, by offering a more robust and less data-intensive approach to accurately segmenting anatomical structures, the proposed method has the potential to enhance clinical tasks that require image segmentation.
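To make the Methods description concrete, the loop below is a minimal, hypothetical sketch of a learning-guided deformable model: a CNN maps a localized image patch around each surface vertex to an energy value, and that energy displaces the vertex along its normal at every iteration. The network architecture, patch size, normal-direction update rule, and toy data are illustration-only assumptions, not the authors' Deep-DM implementation.

```python
# Conceptual sketch (not the published Deep-DM code): a CNN-predicted energy
# drives the iterative evolution of an explicit surface toward the target.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EnergyCNN(nn.Module):
    """Toy 3D CNN mapping a localized patch around each vertex to a scalar energy."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(8, 1),
        )

    def forward(self, patches):            # (N, 1, p, p, p) -> (N, 1)
        return self.net(patches)

def extract_patches(volume, vertices, p=9):
    """Crop a p^3 patch of the volume around each vertex (nearest voxel).
    Boundary handling is simplified for illustration."""
    pad = p // 2
    vol = F.pad(volume, (pad,) * 6)         # constant-pad all three spatial dims
    patches = []
    for v in vertices.round().long():
        x, y, z = (v + pad).tolist()
        patches.append(vol[x - pad:x + pad + 1,
                           y - pad:y + pad + 1,
                           z - pad:z + pad + 1])
    return torch.stack(patches).unsqueeze(1)  # (N, 1, p, p, p)

def evolve_surface(volume, vertices, normals, model, n_iter=50, step=0.5):
    """Re-sample the localized representation at every iteration and move each
    vertex along its normal by the CNN-predicted (signed) energy."""
    for _ in range(n_iter):
        patches = extract_patches(volume, vertices)
        with torch.no_grad():
            energy = model(patches)          # (N, 1)
        vertices = vertices + step * energy * normals
    return vertices

# Tiny usage example on random data (stands in for an US/MR/CT volume).
volume = torch.rand(64, 64, 64)
vertices = torch.full((10, 3), 32.0) + torch.randn(10, 3) * 2   # initial surface points
normals = F.normalize(torch.randn(10, 3), dim=1)                # per-vertex normals
model = EnergyCNN()
print(evolve_surface(volume, vertices, normals, model, n_iter=5).shape)  # torch.Size([10, 3])
```

The point of the sketch is the structure of the method: the network only ever sees small, localized regions of interest resampled around the current surface, rather than the full volume, which is what the abstract credits for easing learning from limited data.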