Wang Cheng, Cai Zhiqiang, Zhang Yao, Yang Shuqing
School of Physics and Optoelectronic Engineering, Yangtze University, Jingzhou, China.
The First Affiliated Hospital of Yangtze University, Jingzhou, China.
Med Phys. 2025 Aug;52(8):e18049. doi: 10.1002/mp.18049.
Existing deep learning (DL) methods for segmenting COVID-19 lesions in chest CT scans require tedious manual labeling for training. This is labor-intensive and time-consuming for radiologists and limits the application of DL methods in clinical practice.
To develop an unsupervised DL segmentation method that does not require radiologists to provide manual labels for training.
Two hundred chest 3D CT scans from 48 COVID-19 patients (48 baseline CT scans and 152 follow-up CT scans) were used as the training set, and another 65 chest 3D CT scans were used as the test set. A novel self-learning segmentation method was proposed to train a DL segmentation model based on the intrinsic physiological information of the patients rather than the external professional knowledge of radiologists. The proposed method comprised two modules: a self-annotation module and a training module. In the self-annotation module, the progression of COVID-19 pneumonia and the change in lung volumes between a pair of baseline and follow-up CT scans were combined to automatically annotate the abnormal lung region (referred to as the self-label) in the follow-up CT. In the training module, the follow-up CT scans and the corresponding self-labels were used to train a DL segmentation model (referred to as Model-S). The COVID-19 lesions segmented by Model-S in the test set were quantitatively compared with the ground truth. To further evaluate the proposed method, manual labels for the 152 follow-up CT scans were delineated by radiologists and compared with the corresponding self-labels. Another DL segmentation model (referred to as Model-M) was then trained on these manual labels, and the segmentation results of Model-S and Model-M on the test set were also compared.
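The abstract does not detail the exact rules used by the self-annotation module, so the following is only a minimal sketch of the general idea under explicit assumptions: the baseline and follow-up scans are assumed to be already co-registered, a lung mask is assumed to be available, and the attenuation-increase threshold and minimum component size are hypothetical placeholders rather than values from the paper.

```python
import numpy as np
from scipy import ndimage

def self_annotate(baseline_ct, followup_ct, lung_mask,
                  hu_increase_thresh=150, min_cc_voxels=50):
    """Illustrative self-annotation: mark follow-up voxels inside the lung
    whose attenuation rose markedly relative to the co-registered baseline.

    baseline_ct, followup_ct : 3D arrays in Hounsfield units, assumed
        already spatially aligned (registration not shown here).
    lung_mask : boolean 3D array covering the follow-up lung region.
    """
    # Attenuation increase inside the lung suggests new or progressing disease.
    hu_change = followup_ct.astype(np.float32) - baseline_ct.astype(np.float32)
    candidate = lung_mask & (hu_change > hu_increase_thresh)

    # Drop tiny connected components that are likely noise or misalignment.
    labeled, n_components = ndimage.label(candidate)
    sizes = ndimage.sum(candidate, labeled, index=np.arange(1, n_components + 1))
    keep_labels = np.flatnonzero(sizes >= min_cc_voxels) + 1
    return np.isin(labeled, keep_labels)  # boolean self-label volume
```

In the actual method, disease progression is combined with the change in lung volumes between the paired scans; the simple thresholding above only stands in for that combination.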
For the self-annotation results, the Dice similarity coefficient (DSC) between self-labels and manual labels in the follow-up CT scans was 93.0% ± 10.5%, and the lesion volumes of self-labels and manual labels showed a high Spearman correlation (r = 0.99; p < 0.001). For the self-learning segmentation results, the DSC between the lesions segmented by Model-S and the ground truth, and between the lesions segmented by Model-M and the ground truth, were 82.1% ± 6.0% and 83.4% ± 5.2%, respectively. The lesion volumes segmented by Model-S and the ground truth showed a high Spearman correlation (r = 0.99; p < 0.001). In addition, the DSC between the lesions segmented by Model-S and those segmented by Model-M was 87.5% ± 4.0%, and the lesion volumes segmented by Model-S and Model-M also showed a high Spearman correlation (r = 0.98; p < 0.001).
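For reference, the two reported agreement metrics can be computed as follows; this is a generic sketch with illustrative variable names, not the authors' evaluation code.

```python
import numpy as np
from scipy.stats import spearmanr

def dice(pred, ref):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, ref).sum() / denom

# Lesion-volume agreement across a cohort (one volume per scan, e.g., in mL):
# rho, p_value = spearmanr(self_label_volumes, manual_label_volumes)
```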
The proposed unsupervised segmentation method can automatically and accurately segment COVID-19 lesions in chest 3D CT scans without requiring radiologists to provide any manual labels for training during the development of the method.