Deep superpixel generation and clustering for weakly supervised segmentation of brain tumors in MR images.

Author information

Yoo Jay J, Namdar Khashayar, Khalvati Farzad

Affiliations

Institute of Medical Science, 1 King's College Circle, Toronto, M5S 1A8, Ontario, Canada.

Department of Diagnostic & Interventional Radiology, The Hospital for Sick Children, 555 University Avenue, Toronto, M5G 1X8, Ontario, Canada.

Publication information

BMC Med Imaging. 2024 Dec 18;24(1):335. doi: 10.1186/s12880-024-01523-x.

Abstract

PURPOSE

Training machine learning models to segment tumors and other anomalies in medical images is an important step for developing diagnostic tools but generally requires manually annotated ground truth segmentations, which necessitates significant time and resources. We aim to develop a pipeline that can be trained using readily accessible binary image-level classification labels, to effectively segment regions of interest without requiring ground truth annotations.

METHODS

This work proposes the use of a deep superpixel generation model and a deep superpixel clustering model trained simultaneously to output weakly supervised brain tumor segmentations. The superpixel generation model's output is selected and clustered together by the superpixel clustering model. Additionally, we train a classifier using binary image-level labels (i.e., labels indicating whether an image contains a tumor), which is used to guide the training by localizing undersegmented seeds as a loss term. The proposed simultaneous use of superpixel generation and clustering models, and the guided localization approach allow for the output weakly supervised tumor segmentations to capture contextual information that is propagated to both models during training, resulting in superpixels that specifically contour the tumors. We evaluate the performance of the pipeline using Dice coefficient and 95% Hausdorff distance (HD95) and compare the performance to state-of-the-art baselines. These baselines include the state-of-the-art weakly supervised segmentation method using both seeds and superpixels (CAM-S), and the Segment Anything Model (SAM).
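The two evaluation metrics named above can be sketched as follows. This is an illustrative implementation, not the authors' code: the Dice coefficient measures volumetric overlap between a predicted and a ground-truth binary mask, and HD95 is the 95th percentile of symmetric surface distances. For simplicity this sketch computes HD95 over all foreground voxels rather than extracted surface voxels, a common approximation when masks are thin or small.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, target).sum() / denom

def hd95(pred: np.ndarray, target: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance (in voxels).

    Simplified variant: distances are taken from every foreground voxel
    of one mask to the nearest foreground voxel of the other.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    # Euclidean distance from each voxel to the nearest foreground voxel.
    dist_to_target = distance_transform_edt(~target)
    dist_to_pred = distance_transform_edt(~pred)
    d_pred_to_target = dist_to_target[pred]    # pred points -> target
    d_target_to_pred = dist_to_pred[target]    # target points -> pred
    all_dists = np.concatenate([d_pred_to_target, d_target_to_pred])
    return float(np.percentile(all_dists, 95))
```

Lower HD95 and higher Dice are better; reporting both catches the case where two segmentations overlap well on average but disagree badly at the boundary.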

RESULTS

We used 2D slices of magnetic resonance brain scans from the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2020 dataset and labels indicating the presence of tumors to train and evaluate the pipeline. On an external test cohort from the BraTS 2023 dataset, our method achieved a mean Dice coefficient of 0.745 and a mean HD95 of 20.8, outperforming all baselines, including CAM-S and SAM, which resulted in mean Dice coefficients of 0.646 and 0.641, and mean HD95 of 21.2 and 27.3, respectively.

CONCLUSION

The proposed combination of deep superpixel generation, deep superpixel clustering, and the incorporation of undersegmented seeds as a loss term improves weakly supervised segmentation.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0446/11657002/49d6c3dc8fb8/12880_2024_1523_Fig1_HTML.jpg