Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea.
Division of Biomedical Engineering, Hankuk University of Foreign Studies, Yongin, Republic of Korea.
J Magn Reson Imaging. 2024 Jun;59(6):2252-2262. doi: 10.1002/jmri.28960. Epub 2023 Aug 19.
Deep learning models require large-scale training data to perform reliably, but obtaining annotated datasets in medical imaging is challenging. Weak annotation has emerged as a way to reduce labeling time and effort.
To develop a deep learning model for 3D breast cancer segmentation in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) using weak annotation with reliable performance.
Retrospective.
Seven hundred and thirty-six women with breast cancer from a single institution, divided into a development dataset (N = 544) and a test dataset (N = 192).
FIELD STRENGTH/SEQUENCE: 3.0-T; 3D fat-saturated gradient-echo axial T1-weighted FLASH 3D volumetric interpolated breath-hold examination (VIBE) sequences.
Two radiologists performed weak annotation of the ground truth using bounding boxes. Based on this, the ground truth annotation was completed through automatic and manual correction. A deep learning model using the 3D U-Net transformer (UNETR) architecture was trained with this annotated dataset. The segmentation results on the test set were analyzed quantitatively and qualitatively, with the evaluated regions divided into the whole breast and the region of interest (ROI) within the bounding box.
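The core idea of the weak-annotation workflow is that a bounding box restricts where the voxel-wise mask must be generated, so the correction step only operates inside the box. A minimal sketch of that idea is shown below; the intensity-threshold heuristic and the function name `box_to_mask` are illustrative assumptions, not the paper's actual correction procedure.

```python
import numpy as np

def box_to_mask(volume, box, threshold=None):
    """Sketch: turn a bounding-box weak annotation into a voxel-wise mask.

    volume: 3D array of image intensities.
    box: (z0, z1, y0, y1, x0, x1) half-open voxel indices of the weak annotation.
    threshold: intensity cutoff inside the box; the mean+std default here is an
    assumed heuristic standing in for the automatic/manual correction step.
    """
    z0, z1, y0, y1, x0, x1 = box
    roi = volume[z0:z1, y0:y1, x0:x1]
    if threshold is None:
        threshold = roi.mean() + roi.std()  # assumed heuristic, not the paper's rule
    # The candidate mask is zero everywhere outside the bounding box,
    # and thresholded enhancement inside it.
    mask = np.zeros(volume.shape, dtype=bool)
    mask[z0:z1, y0:y1, x0:x1] = roi > threshold
    return mask
```

In practice the resulting candidate mask would then be manually corrected before being used as the training target, as the abstract describes.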
Quantitatively, the Dice similarity coefficient was used to evaluate the segmentation results, and volume agreement with the ground truth was assessed with the Spearman correlation coefficient. Qualitatively, three readers independently rated the segmentations on a four-point visual scale. A P-value <0.05 was considered statistically significant.
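The two quantitative metrics named above are standard and easy to state explicitly: Dice = 2|A∩B| / (|A|+|B|) on binary masks, and Spearman correlation is Pearson correlation on ranks. A self-contained sketch (the tie-free rank formula is a simplifying assumption; a library routine such as `scipy.stats.spearmanr` would normally be used):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks:
    2 * |intersection| / (|pred| + |gt|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Assumes no ties (no tie correction is applied in this sketch)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx**2).sum() * (ry**2).sum()))
```

For example, two masks that each cover two voxels but overlap in only one give a Dice score of 0.5, and any strictly monotonic relationship between predicted and ground-truth volumes gives a Spearman coefficient of 1.0.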
The developed deep learning model achieved median Dice similarity scores of 0.75 and 0.89 for the whole breast and the ROI, respectively. The volume correlation coefficients with respect to the ground truth volume were 0.82 and 0.86 for the whole breast and the ROI, respectively. The mean visual score across the three readers was 3.4.
The proposed deep learning model trained with weak annotation may show good performance for 3D segmentation of breast cancer on DCE-MRI.
EVIDENCE LEVEL: 3. TECHNICAL EFFICACY: Stage 2.