Spaanderman Douwe J, Starmans Martijn P A, van Erp Gonnie C M, Hanff David F, Sluijter Judith H, Schut Anne-Rose W, van Leenders Geert J L H, Verhoef Cornelis, Grünhagen Dirk J, Niessen Wiro J, Visser Jacob J, Klein Stefan
Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands.
Department of Surgical Oncology, Erasmus MC Cancer Institute, Rotterdam, The Netherlands.
Eur Radiol. 2025 May;35(5):2736-2745. doi: 10.1007/s00330-024-11167-8. Epub 2024 Nov 19.
Segmentations are crucial in medical imaging for morphological, volumetric, and radiomics biomarkers. Manual segmentation is accurate but not feasible in the clinical workflow, while fully automatic segmentation generally performs sub-par.
To develop a minimally interactive deep learning-based segmentation method for soft-tissue tumors (STTs) on CT and MRI.
The interactive method requires the user to click six points near the tumor's extreme boundaries in the image. These six points are transformed into a distance map and serve, together with the image, as input to a convolutional neural network. A multi-center public dataset of 514 patients, covering nine STT phenotypes in seven anatomical locations imaged with CT or T1-weighted MRI, was used for training and internal validation. For external validation, another public dataset was employed, which included five unseen STT phenotypes in the extremities on CT, T1-weighted MRI, and T2-weighted fat-saturated (FS) MRI.
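To illustrate this input encoding, the sketch below converts six extreme-point clicks into a Euclidean distance map and stacks it with the image as a two-channel network input. This is a minimal sketch, not the authors' implementation: the exact distance-transform variant, the placeholder volume, and the click coordinates are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def clicks_to_distance_map(shape, clicks):
    """Encode extreme-point clicks as a Euclidean distance map.

    shape  -- spatial shape of the image volume, e.g., (D, H, W)
    clicks -- iterable of voxel coordinates, one per extreme point
    """
    seeds = np.ones(shape, dtype=bool)   # non-zero everywhere ...
    for c in clicks:
        seeds[tuple(c)] = False          # ... except at the clicked voxels
    # For each voxel: distance to the nearest click (nearest zero element)
    return distance_transform_edt(seeds).astype(np.float32)

# Illustrative usage with a placeholder volume and six made-up extreme points
image = np.zeros((64, 128, 128), dtype=np.float32)
clicks = [(32, 10, 64), (32, 118, 64), (32, 64, 10),
          (32, 64, 118), (5, 64, 64), (60, 64, 64)]
dist_map = clicks_to_distance_map(image.shape, clicks)
network_input = np.stack([image, dist_map])  # (2, D, H, W) channel stack
```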
Internal validation resulted in a Dice similarity coefficient (DSC) of 0.85 ± 0.11 (mean ± standard deviation) for CT and 0.84 ± 0.12 for T1-weighted MRI. External validation resulted in DSCs of 0.81 ± 0.08 for CT, 0.84 ± 0.09 for T1-weighted MRI, and 0.88 ± 0.08 for T2-weighted FS MRI. Volumetric measurements were replicated consistently with low error, both internally (volume: 1 ± 28 mm³, r = 0.99; diameter: -6 ± 14 mm, r = 0.90) and externally (volume: -7 ± 23 mm³, r = 0.96; diameter: -3 ± 6 mm, r = 0.99). Interactive segmentation was considerably faster (CT: 364 s; T1-weighted MRI: 258 s) than manual segmentation (CT: 1639 s; T1-weighted MRI: 1895 s).
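For reference, the DSC reported above measures voxel-wise overlap between a predicted mask A and a reference mask B as DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch, assuming binary NumPy masks; the return value for two empty masks is a convention chosen here, not taken from the paper.

```python
import numpy as np

def dice_similarity_coefficient(pred, ref):
    """DSC = 2 * |A ∩ B| / (|A| + |B|) for two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    total = pred.sum() + ref.sum()
    # Treat two empty masks as perfect (trivial) agreement -- an assumption
    return 2.0 * intersection / total if total > 0 else 1.0
```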
The minimally interactive segmentation method effectively segments STT phenotypes on CT and MRI, with robust generalization to unseen phenotypes and imaging modalities.
Question: Can this deep learning-based method segment soft-tissue tumors faster than manual segmentation and more accurately than other automatic methods?
Findings: The minimally interactive segmentation method achieved accurate segmentation results in internal and external validation, and generalized well across soft-tissue tumor phenotypes and imaging modalities.
Clinical relevance: This minimally interactive deep learning-based segmentation method could reduce the burden of manual segmentation, facilitate the integration of imaging-based biomarkers (e.g., radiomics) into clinical practice, and provide a fast, semi-automatic solution for volume and diameter measurements (e.g., RECIST).
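To illustrate the volume and diameter measurements mentioned above, the sketch below derives both from a binary segmentation mask: volume as voxel count times voxel volume, and a RECIST-style longest in-plane diameter as the largest pairwise distance between foreground voxels in any axial slice. The (z, y, x) mask layout, the spacing convention, and the brute-force diameter search are assumptions for illustration, not the paper's measurement pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist

def volume_ml(mask, spacing):
    """Tumor volume in millilitres: voxel count times voxel volume (mm^3)."""
    return mask.sum() * float(np.prod(spacing)) / 1000.0

def longest_axial_diameter_mm(mask, spacing):
    """RECIST-style longest in-plane diameter in mm.

    Scans each axial slice and takes the largest pairwise distance
    between foreground voxels (brute force; for illustration only).
    """
    _, dy, dx = spacing                      # spacing = (dz, dy, dx) in mm
    best = 0.0
    for axial_slice in mask:
        ys, xs = np.nonzero(axial_slice)
        if len(ys) < 2:
            continue
        pts = np.column_stack([ys * dy, xs * dx])
        best = max(best, pdist(pts).max())
    return best
```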